
4 Concepts


4.1 Introduction and table of contents

4.1.1 Introduction

This clause describes key concepts in ISO/IEC 14772. This includes how nodes are combined into scene graphs, how nodes receive and generate events, how to create node types using prototypes, how to add node types to VRML and export them for use by others, how to incorporate scripts into a VRML file, how nodes and related functionality are organized into profiles, and various general topics.

The complete set of VRML functionality is subdivided into several components. These components are used to aggregate similar concepts for easy integration in implementations.

4.1.2 Table of contents

See Table 4.1 for the table of contents for this clause.

Table 4.1 -- Table of contents, Concepts

4.1 Introduction and table of contents
  4.1.1 Introduction
  4.1.2 Table of contents
  4.1.3 Conventions used

4.2 Overview
  4.2.1 The structure of a VRML file
  4.2.2 Header
  4.2.3 Profiles
  4.2.4 Encodings
  4.2.5 Validating versus non-validating parsers
  4.2.6 Basic concepts
  4.2.7 File compression

4.3 Architecture
  4.3.1 Overview
  4.3.2 Object model
  4.3.3 Prototype creation
  4.3.4 Raw scene graph
  4.3.5 Event routing
  4.3.6 Managers
  4.3.7 Page integration
  4.3.8 Generating VRML files
  4.3.9 Presentation and interaction

4.4 Abstract syntax
  4.4.1 Description mechanism
  4.4.2 Statements
  4.4.3 Node statement syntax
  4.4.4 Field statement syntax
  4.4.5 PROTO statement syntax
  4.4.6 IS statement syntax
  4.4.7 EXTERNPROTO statement syntax
  4.4.8 USE statement syntax
  4.4.9 ROUTE statement syntax

4.5 Scene graph structure
  4.5.1 Overview
  4.5.2 Root nodes
  4.5.3 Scene graph hierarchy
  4.5.4 Descendant and ancestor nodes
  4.5.5 Transformation hierarchy
  4.5.6 Standard units and coordinate system
  4.5.7 Run-time name scope
  4.5.8 Top-level interface
  4.5.9 Behaviour graph
  4.5.10 Media graph

4.6 VRML and the World Wide Web
  4.6.1 File extension and MIME type
  4.6.2 URLs
  4.6.3 Relative URLs
  4.6.4 Scripting language protocols

4.7 Node semantics
  4.7.1 Introduction
  4.7.2 DEF/USE semantics
  4.7.3 Shapes and geometry
  4.7.4 Bounding boxes
  4.7.5 Grouping and children nodes
  4.7.6 Light sources
  4.7.7 Sensor nodes
  4.7.8 Interpolator nodes
  4.7.9 Time-dependent nodes
  4.7.10 Bindable children nodes
  4.7.11 Surfaces, images and textures

4.8 Field semantics

4.9 Prototype semantics
  4.9.1 Introduction
  4.9.2 PROTO definition semantics
  4.9.3 Scoping rules
  4.9.4 Prototype interface declaration semantics
  4.9.5 Derivation and Inheritance
  4.9.6 Private fields and functions
  4.9.7 Prototype initialization
  4.9.8 Anonymous prototypes

4.10 External prototype semantics
  4.10.1 Introduction
  4.10.2 EXTERNPROTO interface semantics
  4.10.3 EXTERNPROTO URL semantics

4.11 Event processing
  4.11.1 Introduction
  4.11.2 Route semantics
  4.11.3 Execution model
  4.11.4 Loops
  4.11.5 Fan-in and fan-out

4.12 Time
  4.12.1 Introduction
  4.12.2 Time origin
  4.12.3 Discrete and continuous changes

4.13 Authoring
  4.13.1 Introduction
  4.13.2 Script execution
  4.13.3 initialize() and shutdown()
  4.13.4 eventsProcessed()
  4.13.5 Scripts with direct outputs
  4.13.6 Asynchronous scripts
  4.13.7 Script languages
  4.13.8 Event handling
  4.13.9 Accessing fields and events
  4.13.10 Scene authoring interface

4.14 Navigation
  4.14.1 Introduction
  4.14.2 Navigation paradigms
  4.14.3 Viewing model
  4.14.4 Collision detection and terrain following

4.15 Lighting model
  4.15.1 Introduction
  4.15.2 Lighting 'off'
  4.15.3 Lighting 'on'
  4.15.4 Lighting equations
  4.15.5 References

4.1.3 Conventions used

The following conventions are used throughout this part of ISO/IEC 14772:

Italics are used for event and field names, and are also used when new terms are introduced and equation variables are referenced.

A fixed-space font is used for URL addresses and source code examples.

Node type names are appropriately capitalized (e.g., "The Billboard node is a grouping node..."). However, the concept of the node is often referred to in lower case in order to refer to the semantics of the node, not the node itself (e.g., "To rotate the billboard...").

The form "0xhh" expresses a byte as a hexadecimal number representing the bit configuration for that byte.

While X3D supports a variety of encodings, this clause shows examples in the UTF-8 encoding. Nodes are shown as a capitalized name followed by an open brace, followed by field initializations, ending with a closing brace. A named node is preceded by the keyword DEF and an identifier.

X3D uses a component hierarchy, with every component in the system deriving from one or more parent interfaces. Some components are intended to be instantiated while others are abstract, intended to be derived by other components. Abstract components append Node to the name, while instantiated components do not. For instance, SurfaceNode is an abstract node while ImageSurface is intended to be instantiated.

Examples appear in bold, fixed-space font. In some examples, prototypes are used to introduce new object types. Unless otherwise noted, prototypes are specified using the keyword PROTO, followed by a name, an interface declaration enclosed in square brackets, and the body of the prototype enclosed in braces.

Throughout this part of ISO/IEC 14772, references are denoted using the "x.[ABCD]" notation, where "x" denotes which clause or annex the reference is described in and "[ABCD]" is an abbreviation of the reference title. For example, 2.[ABCD] refers to a reference described in clause 2 and B.[ABCD] refers to a reference described in annex B.

4.2 Overview

4.2.1 The structure of a VRML file

A VRML file consists of the following major functional components: the header, the scene graph, the prototypes, and event routing. The contents of this file are processed for presentation and interaction by a program known as a browser.

4.2.2 Header

For easy identification of VRML files, every VRML file shall begin with:

#VRML V3.0 <encoding type> [optional comment] <line terminator>

The header is a single line of UTF-8 text identifying the file as a VRML file and identifying the encoding type of the file; profile information, if any, follows in additional comment lines (see 4.2.3). The header may also contain additional semantic information. There shall be exactly one space separating "#VRML" from "V3.0" and "V3.0" from "<encoding type>". The "<encoding type>" shall be followed by a linefeed (0x0a) or carriage-return (0x0d) character, or by one or more space (0x20) or tab (0x09) characters followed by any other characters, which are treated as a comment, and terminated by a linefeed or carriage-return character.

The <encoding type> is any of the authorized values defined in other parts of ISO/IEC 14772.

Any characters after the <encoding type> on the first line may be ignored by a browser. The header line ends at the occurrence of a <line terminator>. A <line terminator> is a linefeed character (0x0a) or a carriage-return character (0x0d).
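For example, the following header line (informative; the comment text is arbitrary) identifies a UTF-8 encoded file and carries an optional comment:

    #VRML V3.0 utf8 A simple example world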

4.2.3 Profiles

ISO/IEC 14772 supports the concept of profiles. A profile is a named collection of functionality and requirements which shall be supported in order for an implementation to conform to that profile. Two profiles are defined in this part of ISO/IEC 14772. The full functionality of VRML is described in ISO/IEC 14772-1 clauses 1 through 6. General conformance criteria are described in Clause 7. Clause 8 defines the conformance and minimum support requirements for the Core profile. Clause 9 defines the conformance and minimum support requirements for the Base profile. Additional profiles may be defined in other parts of ISO/IEC 14772 or through registration. Such profiles shall incorporate the entirety of the Core profile.

The set of VRML features is subdivided into different functionality blocks called components. A given component can support functionality at different levels. An author specifies the required functionality level for the content. A VRML browser can use profile information to load only the required components, or to dynamically load new program modules if the content requires component levels not currently available on the client system. Table 4.2 lists the components of VRML.

Table 4.2 -- VRML components

Component       Description                                                                 Applicable profiles
rendering       the supported set of rendering attributes                                   Core, Base
geometry        the supported geometric primitives                                          Core, Base
navigation      the features of the navigation system                                       Base
media.texture   the allowable image media types, given as MIME types
                (e.g., image/gif or image/png)                                              Core, Base
scripting       Script node support (ECMAScript and Java)                                   Base
language        language features used                                                      Core, Base

Profiles are indicated by specially formatted comment lines immediately following the standard VRML file header.

#VRML V3.0 utf8
#VRML profile=core

The example header above indicates that a given UTF-8 encoded VRML content file is compatible with the core profile defined in 8.[Core profile]. Only features available in the profile are specified or used in the content.

#VRML V3.0 binary
#VRML profile=core
#VRML profile:rendering=base
#VRML profile:geometry=core

The second example above indicates binary-encoded content that is written against the full VRML lighting and rendering model (the Base level of the rendering component) but with the Core level of the geometry component.

#VRML V3.0 XML
#VRML profile=core
#VRML profile:rendering=core
#VRML profile:geometry=core
#VRML profile:media.texture=image/gif image/jpeg
#VRML profile:scripting=none

The third example header indicates that the content is XML encoded and explicitly lists level values for several components.

#VRML V3.0 utf8
#VRML profile=base

The fourth example header indicates UTF-8 content that conforms to the Base profile.

The profile(s) supported by a browser implementation can be queried using an API call. Viewers should support an option allowing specification of different sets of URLs corresponding to versions of the content written for different profiles. This allows players supporting higher-level profiles to pick the content version for the higher-level profile.

Browsers that wish to support VRML 97 shall also recognize the following header:

#VRML V2.0 utf8

Such support is optional.

4.2.4  Encodings

This part of ISO/IEC 14772 defines VRML in an abstract form. To use VRML, this abstract form is encoded according to the definitions in other parts of this standard. In particular, the following encodings are supported:

    ISO/IEC 14772-3:  UTF-8 encoding
    ISO/IEC 14772-4:  XML encoding (X3D)
    ISO/IEC 14772-5:  Binary encoding

Other encodings may be defined as required. All such encodings shall support exactly the concepts and semantics defined in this part of ISO/IEC 14772.

4.2.5  Validating versus non-validating parsers

To balance the need for fast and efficient parsers with the need for testable compliance, VRML supports the notion of validating as well as non-validating parsers. A validating parser for a given encoding is required to generate user-visible errors for all syntactic and semantic errors which can occur in that format. If producing a raw scene graph for input to the engine (as opposed to an offline file validator), the scene produced shall omit any nodes or constructs which are in error. This may sometimes cause an empty scene to be produced.

A non-validating parser shall accept all syntactically and semantically correct files (as indicated by a validating parser) and produce compliant behavior. Incorrect files, however, may be processed with a wide latitude of acceptable results. The only requirement is that the resulting scene shall not cause the runtime to enter an unrecoverable failure state (crashing or hanging). It is expected that most production runtime engines will contain non-validating parsers for efficiency, and that validating parsers will be used in authoring tools and offline file validators.

4.2.6  Basic concepts

VRML is an architecture for describing 2D and 3D multimedia applications. It contains both declarative elements (nodes and field connections) and procedural elements (executable code). The basic structure of VRML is the scene graph, which describes the visible and behavioral elements and their relationship to one another. Central to VRML is the runtime environment, which represents the current state of the scene graph, renders it as needed, and performs changes to it in response to instructions from the behavioral system.

Typically, the author of VRML content begins by writing a VRML file or stream containing the initial state of the application along with the declarative and procedural statements which describe its behavior over time. A VRML file can be purely declarative, with behaviors described in terms of nodes and connections between their fields. Or it can be purely procedural, with the visual and behavioral elements built dynamically as part of the application execution. But often it is a combination of both.

The VRML file or stream, hereafter referred to simply as a file, must be viewed as a snapshot of the application. While the file describes a particular structure at a particular time (typically when the application starts), this structure can change rapidly as the application proceeds. A file is a complete representation of the runtime environment at any given time. In other words, anything that can be represented by the runtime can be written to a file. While not required, a VRML implementation that has the ability to write a VRML file can do so at any time to save an accurate and complete representation of the runtime environment at that instant.

The runtime system makes available a set of zero or more built-in objects. An object contains some set of functionality useful in the VRML environment. It may represent a data structure, such as a Vec3f, a rendered primitive, such as a Cylinder, or some behavioral function, such as a TouchSensor. Each object contains zero or more properties, which define storage for data values or functions for operating on that data. For instance, a property might contain the height or radius of the Cylinder, or whether or not the TouchSensor is enabled.

Objects are instantiated to be used. This is done automatically by declaring them in the file or, at runtime, using procedural code. The author can also create new object types using the prototyping mechanism. These become part of the runtime system and behave exactly like built-in objects. These can be created declaratively by including a prototype in a file, or by including an object written in a supported native language, such as Java or C++.
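For example, the following UTF-8 fragment (an informative sketch, assuming the VRML97-style Shape node and the Cylinder and TouchSensor objects mentioned above) instantiates built-in objects simply by declaring them in the file:

    Shape {
        geometry Cylinder {
            height 2       # the height property of this Cylinder instance
            radius 0.5     # the radius property
        }
    }
    TouchSensor {
        enabled TRUE       # whether or not this TouchSensor is enabled
    }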

4.2.7  File compression

VRML files, regardless of encoding, can be compressed using gzip. VRML browsers shall automatically recognize the compression and decompress as necessary.

4.3 Architecture

4.3.1  Overview

[Need to define architecture to ensure that this is a design architecture and implementations may differ provided the semantic requirements are met.]

VRML comprises a complete architecture rather than simply a language. This includes a file format, with binary, utf8 and XML encodings. But it also specifies the environment into which those files are injected, and the mechanisms used to display, interact with, modify, and extend the scene. In fact, VRML is more than a single scene. It is a system comprising potentially multiple scenes along with various pieces of script logic and native code used to manipulate and control them. The order of evaluation of each scene element and the execution order of every script are well defined and controlled by the VRML runtime engine.

VRML has components to control the creation and management of scenes, rendering and behavior, and media asset management. There are also components to control the loading and incorporation of authored extensions, which can be written in VRML, or a supported external language such as C++, Java or ECMAScript.

The basic VRML architecture is shown in Figure 4.1:

Figure 4.1:  VRML Architecture

At the core of the VRML architecture is a Runtime Engine which presents various API elements and the object model to a set of objects present in the system. During normal operation, a file is parsed into a Raw Scene Graph and passed on to the core, where its objects are instantiated and the runtime scene graph is built. Objects can be one of three types: Built-in, Author defined, or Native. Objects use the set of available Managers to obtain platform services such as event handling, loading of assets and playing of media. Objects use the Rendering Layer to compose intermediate or final images for display. A Page integration component is used to interface VRML to an external environment, such as an HTML or XML page.

4.3.2  Object model

The VRML system contains an object hierarchy. Every object derives some of its functionality from its parent objects, and then extends or modifies it. At the base of the hierarchy is Object. The two main classes of object derived from this are Node and Field. Nodes contain, among other things, a render method which gets called as part of the render traversal. Objects derived from Field do not have a render method and are typically used to hold data values.

Every object can contain a set of named data properties and function properties. Data properties hold either a single data value of a given type or an array of zero or more such values. These data properties are referred to as fields. Function properties hold a reference to a sequence of executable code written in the built-in scripting language, typically used to operate on data in the fields or to effect some change in the system state. All function properties are of type Function, which is derived directly from Object.

Each object in the system has a well-defined type and parent hierarchy (from which objects this object derives some or all of its functionality). Functions are available to test whether or not one object matches another object's type and whether or not one object's type appears in the parent hierarchy of another object. The type of a data property describes which objects it is allowed to contain as data. For instance, the translation field of a Transform node is of type Vec3f. Only objects of this type or derived from this type may appear as data values of this property. The radius property of a Cylinder node, on the other hand, is of type Float. Float is one of the basic numeric types. It is derived from Field, but it does not contain any properties other than its implicit value. There are several other basic types, which are described below.

The Proto object defines the implementation of each object in the system. It contains information about the name and type of each property of the object. For some objects it also contains a list of prototypes and routes defined in its scope. An object instance is always created through a function call to its Proto object. Each object has a reference to the Proto used to create it. It is this Proto that uniquely identifies the object's type.

The Field object not only serves as the root of all field objects, but also allows for a syntactic differentiation in the various file formats. For instance, in the UTF-8 encoding, fields are initialized by listing the values of the field in order. This is contrasted with the initialization of nodes, which requires the node name, followed by field names and initialization values, enclosed in braces. This distinction in initialization style makes the UTF-8 encoding simple to understand and to author. The separation of Nodes and Fields into separate hierarchies allows this syntactic variation to be encoded in these root objects.

This allows VRML syntactic constructs to be created, such as:

    MyNode { myField 1 2 3 }
where MyNode is a node (not derived from Field). MyNode contains a property called myField which is of type Vec3f, derived from Field. Furthermore, Vec3f contains 3 properties, x, y, and z, each of type Float. Without the special semantics of Field, the above would have to be written as:
    MyNode {
        myField Vec3f { x 1 y 2 z 3 }
    }
It is important to note that the VRML semantic rules for file parsing (and the raw scene graph) allow both forms for field types. But the first form allows a more compact representation when it is expected that a value for each property will be needed. Note also that, although the UTF-8 encoding is shown, the same semantic rules apply to the binary and XML encodings as well as all other encodings which may be defined in the future.

4.3.3  Prototype creation

Objects come from three sources. Built-in objects are objects that are available to the author without any additional description. Different profiles have different sets of built-in objects. Author-defined objects are objects that the author creates using a PROTO or EXTERNPROTO statement. Native objects are objects added to the system using the EXTERNPROTO statement, with the implementation written in a supported extension language, such as C++ or Java. Some profiles do not support this capability.

4.3.4  Raw scene graph

Every VRML presentation begins with a file or stream of VRML content. In reality, VRML could be linked with a custom application which generated an entire scene programmatically. But this is a special case and is a subset of the general flow through the VRML runtime engine. A description of the flow of content through the system is given below. This is a conceptual description. A compliant VRML runtime engine shall behave as though it were operating as shown, although the actual implementation may be quite different.

Content typically flows into an appropriate parser. At different profiles and levels, source could be provided as a UTF-8 encoding, a binary encoding, or an XML encoding. Regardless of the incoming format, all streams are converted into a Raw Scene Graph. This structure represents all the nodes, fields and other objects in the content, as well as field initialization values. It also contains a description of all object prototypes, external prototype references in the stream, and route statements.

At the top level of the raw scene graph are all the nodes, top-level fields and functions, prototypes, and routes contained in the file. It is important to note that VRML allows fields and functions at the top level in addition to the traditional elements. These are used to provide an interface to an external environment, such as an HTML page. They also provide the object interface when a file or stream is used as the contents of an external prototype.

Each Raw Node contains a list of the fields initialized within its context. Each field is a Raw Field entry containing the name, type (if given) and data value(s) for that field. Each data value contains either a number, a string, a raw node or a raw field (to represent an explicitly typed field value).

The prototypes are extracted from the top level of the raw scene graph and used to populate the database of object prototypes accessible by this scene. The raw scene graph is then sent through a build traversal. During this traversal, each object is built, using the database of object prototypes. Objects built from authored prototypes are able to execute code (written either in the built-in scripting language or an external language) to aid in the creation of the object. But no other object is accessible at this time and no events may be sent from fields of the instance. During the creation of a node, its publicly accessible DEF name is added to the root DEF dictionary, if appropriate.

Next all routes in the file or stream are established. Since the scene is traversed and the DEF dictionary is built in a previous pass, the from and to node names of the route can be forward references. This has the advantage of greater authorability, but it also adds the restriction that DEF names in a stream may not be duplicated. For backward compatibility, VRML files with duplicate names are preprocessed to make the names unique. For completeness, a USE of a node may come before its DEF.
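The following UTF-8 fragment (an informative sketch, assuming VRML97-style TimeSensor and ScalarInterpolator nodes) illustrates the forward-reference rule: the ROUTE statement names nodes whose DEF statements appear later in the stream:

    ROUTE TICK.fraction_changed TO FADER.set_fraction   # forward references
    DEF TICK TimeSensor { loop TRUE }
    DEF FADER ScalarInterpolator {
        key      [ 0, 1 ]
        keyValue [ 0, 1 ]
    }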

Next, each field in the scene is initialized. Initial events are sent to all non-default fields of all objects. Since the scene graph structure is achieved through the use of node fields, this step constructs the scene hierarchy as well. Events are fired using an in-order traversal. The first node encountered enumerates its fields. If a field is a node, that node is traversed first, which initializes all nodes in that branch of the tree. Then an event is sent to that node field with its initial value.

After a given node has had all its fields initialized, the initialize() method of that node is called. This allows authors to add initialization logic to prototyped objects and be assured that the node is fully initialized when called. The initialize() method of converted VRML 97 Script nodes would be called at this time as well.
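As an informative sketch, a VRML97-style Script node using the javascript: protocol described in 4.6.4 might define its initialize() function as follows; the draft does not mandate this particular form for prototyped objects:

    DEF GREETER Script {
        url "javascript:
            function initialize() {
                // called once, after every field of this node
                // has received its initial value
            }"
    }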

The above steps produce a root scene, which is delivered to a Scene Manager created for this file. The scene manager is then used to render and perform behavioral processing, either implicitly or under author control.

4.3.5  Event routing

Events can be injected into the VRML environment from several sources. Typically, nodes register interest in specific event types, such as timing, keyboard or pointing device events. When the given event occurs, the node receives notification and can potentially change the value of one or more of its fields. VRML has a mechanism called routing, which allows an author to declaratively connect fields together. When the value of a source routed field changes, the corresponding destination routed field receives notification and can process a response to that change. This processing can change the state of the node, generate additional events, or change the structure of the scene graph.
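The following UTF-8 fragment (an informative sketch, assuming VRML97-style sensor and interpolator nodes) shows a typical chain of such connections: a pointing-device event starts a timer, whose fractional output drives an interpolator, which in turn rotates a Transform:

    DEF BOX Transform {
        children [
            DEF TOUCH TouchSensor { }
            Shape { geometry Box { } }
        ]
    }
    DEF TIMER TimeSensor { cycleInterval 4 }
    DEF SPIN OrientationInterpolator {
        key      [ 0, 0.5, 1 ]
        keyValue [ 0 1 0 0, 0 1 0 3.14, 0 1 0 6.28 ]
    }
    ROUTE TOUCH.touchTime        TO TIMER.set_startTime
    ROUTE TIMER.fraction_changed TO SPIN.set_fraction
    ROUTE SPIN.value_changed     TO BOX.set_rotation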

Events conceptually follow an event listener model of processing. When a route is established, the source field is assigned a listener, which is the node and field where the route is destined. When a field value changes, it notifies each listener in the order in which it was assigned. This implies a depth first order of event distribution. The routes along a single event chain execute in sequence, followed by the next chain and so on. This sequence is controlled by the Event Manager and can be changed by installing authored event handlers into the Event Manager.

Arbitrary, author-defined event processing occurs through use of the VRML Scene Authoring Interface (SAI) defined in ISO/IEC 14772-2. The SAI provides access to the scene graph and the events which affect it, either internally via the Script node or from an external program. An event received by a Script node causes the execution of a function within a script, which has the ability to send events through the normal event routing mechanism, or to bypass this mechanism and send events directly to any node to which the Script node has a reference. Scripts can also dynamically add or delete routes and thereby change the event-routing topology. Similar capabilities exist when access is from an external program.

Nodes created through the VRML prototyping mechanism give authors an opportunity to create custom processing of incoming events. Events coming into a prototyped node through an interface field can be routed to internal nodes for processing, or routed to other interface fields for propagation outside the node. An author can also add programmatic processing logic to an interface field using the internal scripting language. For more information on event processing, see section ???.
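As an informative sketch (assuming VRML97-style nodes), the following prototype receives an event through its interface and routes it to an internal node for processing:

    PROTO TimedSpin [
        eventIn SFTime start
    ] {
        DEF T Transform {
            children Shape { geometry Cone { } }
        }
        DEF CLOCK TimeSensor {
            cycleInterval 3
            set_startTime IS start    # interface event routed to an internal node
        }
        DEF INTERP OrientationInterpolator {
            key      [ 0, 1 ]
            keyValue [ 0 1 0 0, 0 1 0 6.28 ]
        }
        ROUTE CLOCK.fraction_changed TO INTERP.set_fraction
        ROUTE INTERP.value_changed TO T.set_rotation
    }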

4.3.6  Managers [HOW DOES THIS MAP TO THE SAI?]

VRML contains a System object with references to a set of managers. Each manager provides a set of APIs to control some aspect of the system. The Event Manager provides access to incoming system events, originated by user input or environmental events. The Load Manager facilitates the loading of VRML files and native node implementations. The Media Manager provides the ability to load, control and play audio, image and video media assets. The Render Manager allows the creation and management of objects used to render scenes. The Scene Manager controls the scene graph. The Surface Manager allows the creation and management of surfaces onto which scene elements and other assets may be composited. The Thread Manager gives authors the ability to spawn and control threads and to communicate between them.

4.3.7  Page Integration

VRML includes a Page Integration component, which allows the VRML runtime environment to interact with page-oriented environments such as HTML browsers. This mechanism is used by including an element, VRMLScene, in the page, which introduces a VRML file to the page environment. After introduction, the top-level fields in the VRML file are made available to the page through the X3DParam (for communication from the page to the scene) and VRMLObserver (for communication from the scene to the page) elements. Using the Scene Authoring Interface via the DOM API to access these elements, runtime control of the scene can be provided by procedural elements on the page. For more information on this interface, see ISO/IEC 14772-2, Annex B.

4.3.8 Generating VRML files

A generator is a human or computerized creator of VRML files. It is the responsibility of the generator to ensure the correctness of the VRML file and the availability of supporting assets (e.g., images, audio clips, other VRML files) referenced therein.

4.3.9 Presentation and interaction

The interpretation, execution, and presentation of VRML files occurs using a mechanism known as a VRML browser, which displays the shapes and sounds in the scene graph. This presentation is known as a virtual world and is navigated in the browser by a human or mechanical entity, known as a user. The world is displayed as if experienced from a particular location; that position and orientation in the world is known as the viewer. The browser may provide navigation paradigms (such as walking or flying) that enable the user to move the viewer through the virtual world.

In addition to navigation, the browser provides a limited mechanism allowing the user to interact with the world through sensor nodes in the scene graph hierarchy. Sensors respond to user interaction with geometric objects in the world, the movement of the user through the world, or the passage of time.  Additionally, the VRML Scene Authoring Interface (SAI) provides mechanisms for getting user input, and for getting and setting the current viewpoint. To provide navigation capabilities, a viewer may use the SAI to provide the user with the ability to navigate. Additionally, authors may use scripting or programming languages with bindings to the SAI to implement their own navigation algorithms. Other profiles may specify navigation capabilities as a requirement of the viewer; implementations of such viewers will typically do so by making use of the SAI.

The visual presentation of geometric objects in a VRML world follows a conceptual model designed to resemble the physical characteristics of light. The VRML lighting model describes how appearance properties and lights in the world are combined to produce displayed colours (see 4.15, Lighting model, for details).


4.4 Abstract syntax

4.4.1 Description mechanism

This subclause describes the abstract syntax of VRML files. This abstract syntax exists to define the semantics of VRML but is not intended to be realized directly. A formal description of the syntax may be found in annex A, Grammar definition. The semantics of VRML are presented in this part of ISO/IEC 14772 in terms of this abstract syntax. The standard encodings are defined in other parts of ISO/IEC 14772. Such encodings shall describe how to map the abstract syntax descriptions to and from the corresponding encoding elements.

The description mechanism consists of a series of categories. Each category name is enclosed in angle brackets. The categories are listed in a particular sequence as required by the statement in which they are embedded. The statements consist of a sequence of categories each representing a semantic element within the statement. Optional elements are enclosed in square brackets. Repeating elements are represented by an ellipsis. Elements may be grouped using parentheses.

4.4.2 Statements

After the required header, a VRML file may contain any combination of the following:

  1. Any number of PROTO or EXTERNPROTO statements (see 4.9, Prototype semantics);
  2. Any number of root node statements (see 4.5.2, Root nodes);
  3. Any number of USE statements (see 4.7.2, DEF/USE semantics);
  4. Any number of ROUTE statements (see 4.11.2, Route semantics).

4.4.3 Node statement syntax

A node statement consists of an optional "DEF" name for the node followed by the node's type and then the body of the node. See A.3, Nodes, for details on node abstract grammar rules.

    [<DEF name>] <nodeType> ( <body> )

A node's body consists of any number of field statements, IS statements, ROUTE statements, PROTO statements or EXTERNPROTO statements, in any order.

See 4.7.2, DEF/USE semantics, for more details on node naming. See 4.4.4, Field statement syntax, for a description of field statement syntax and 4.8, Field semantics, for a description of field statement semantics. See 4.7, Node semantics, for a description of node statement semantics.

4.4.4 Field statement syntax

A field statement consists of the field's type, the name of the field, and the field's value(s). The following illustrates the syntax for a field statement:

    <fieldType> <name> <fieldValue>

The fieldType indicates whether the field is an eventIn, a field, an exposedField, or an eventOut. The name provides a name for the field. The fieldValue specifies the initial value for the field; its form depends on the data type of the field, as some data types have single values while others have multiple values.

See A.4, Fields, for details on field statement grammar rules.

Each node type defines the names and types of the fields that each node of that type contains. The same field name may be used by multiple node types. See 5, Field and event reference, for the definition and syntax of specific field types.

See 4.8, Field semantics, for a description of field statement semantics.

4.4.5 PROTO statement syntax

A PROTO statement consists of the prototype name, prototype interface declaration, and prototype definition:

    <protoName> <protoDeclaration> <protoDefinition>

See A.2, General, for details on prototype statement grammar rules.

A prototype interface declaration consists of field declarations (see 4.8, Field semantics). 

Field declarations consist of a fieldType, a fieldDataType, a name, and an optional initial value:

    <fieldType> <fieldDataType> <name> [<initialFieldValue>]

The fieldType indicates whether the field statement is for an eventIn, a field, an exposedField, or an eventOut. 

The initialFieldValue shall be provided for field statements with fieldType field or exposedField.

Field, eventIn, eventOut, and exposedField names shall be unique in each PROTO statement, but are not required to be unique between different PROTO statements. If a PROTO statement contains an exposedField with a given name (e.g., zzz), it shall not contain eventIns or eventOuts with the prefix set_ or the suffix _changed and the given name (e.g., set_zzz or zzz_changed).

A prototype definition consists of at least one node statement and any number of ROUTE statements, PROTO statements, and EXTERNPROTO statements in any order.

See 4.9, Prototype semantics, for a description of prototype semantics.

4.4.6 IS statement syntax

The body of a node statement that is inside a prototype definition may contain IS statements. An IS statement consists of the name of a field, exposedField, eventIn or eventOut from the node's public interface, followed by an indicator that this is an IS statement, followed by the name of a field, exposedField, eventIn or eventOut from the prototype's interface declaration:

    <field/eventName> <isIndicator> <field/eventName>

See A.3, Nodes, for details on prototype node body grammar rules. See 4.9, Prototype semantics, for a description of IS statement semantics.
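The following UTF-8 fragment (informative; the node types are VRML97-style) shows an IS statement connecting a field of an internal node to the prototype's interface declaration:

    PROTO ColoredBox [
        field SFColor boxColor 1 0 0
    ] {
        Shape {
            appearance Appearance {
                material Material { diffuseColor IS boxColor }
            }
            geometry Box { }
        }
    }
    ColoredBox { boxColor 0 0 1 }    # instantiate with a blue colour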

4.4.7 EXTERNPROTO statement syntax

An EXTERNPROTO statement consists of the EXTERNPROTO keyword followed in order by the prototype's name, its interface declaration, and a list (possibly empty) of double-quoted strings enclosed in square brackets. If there is only one member of the list, the brackets are optional.

    <externProtoName> <externalDeclaration> ( <URL> ... )

See A.2, General, for details on external prototype statement grammar rules.

An EXTERNPROTO interface declaration is the same as a PROTO interface declaration, with the exception that field and exposedField initial values are not specified and the prototype definition is specified in a separate VRML file to which the URL(s) refer.

4.4.8 USE statement syntax

A USE statement consists of a USE indicator followed by a node name:

    <useIndicator> <name>

See A.2, General, for details on USE statement grammar rules.
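For example (informative, using VRML97-style nodes), the second Shape below re-uses the Appearance node named WOOD rather than creating a copy:

    Shape {
        appearance DEF WOOD Appearance {
            material Material { diffuseColor 0.6 0.4 0.2 }
        }
        geometry Box { }
    }
    Shape {
        appearance USE WOOD    # the same node instance, now with two parents
        geometry Sphere { }
    }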

4.4.9 ROUTE statement syntax

A ROUTE statement consists of a routeIndicator followed in order by a node name, a field name, a toIndicator, a node name, and a field name:

    <routeIndicator> <name> <field/eventName> <toIndicator> <name> <field/eventName>

See A.2, General, for details on ROUTE statement grammar rules.
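In the UTF-8 encoding this abstract syntax takes a form such as the following informative sketch (assuming VRML97-style sensor nodes):

    DEF CLICKER TouchSensor { }
    DEF CLOCK TimeSensor { }
    ROUTE CLICKER.touchTime TO CLOCK.set_startTime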


4.5 Scene graph structure

4.5.1  Overview

The basic unit of the VRML runtime environment is the scene graph. This structure contains all the objects in the system and their relationships. Relationships are expressed along several axes of the scene graph. The transformation hierarchy describes the spatial relationship of rendering objects. The top-level interface describes the external interface of a scene fragment with a given file scope. The behavior graph describes the connections between fields and the flow of events through the system. The media graph describes the relationships between the various timed media elements in the scene. Other relationships, such as the surface node hierarchy and the texture hierarchy, are described in the node reference.

4.5.2 Root nodes

A VRML file contains zero or more root nodes. The root nodes for a VRML file are those nodes defined by the node statements or USE statements that are not contained in other node or PROTO statements. Root nodes shall be children nodes (see 4.7.5, Grouping and children nodes).

4.5.3 Scene graph hierarchy

A VRML scene graph is a directed acyclic graph. Nodes can contain specific fields whose values are one or more children nodes, which participate in the hierarchy. These children may, in turn, contain nodes (or instances of nodes). This hierarchy of nodes is called the scene graph. Each arc in the graph from A to B means that node A has a field whose value directly contains node B. See 2.[FOLE] for details on hierarchical scene graphs.

4.5.4 Descendant and ancestor nodes

The descendants of a node are all of the nodes in its fields, as well as all of those nodes' descendants. The ancestors of a node are all of the nodes that have the node as a descendant.

4.5.5 Transformation hierarchy

The transformation hierarchy includes all of the root nodes and root node descendants that are considered to have one or more particular locations in the virtual world. VRML includes the notion of local coordinate systems, defined in terms of transformations from ancestor coordinate systems. The coordinate system in which the root nodes are displayed is called the world coordinate system.

A VRML browser's task is to present a VRML file to the user; it does this by presenting the transformation hierarchy to the user. The transformation hierarchy describes the directly perceptible parts of the virtual world.

Some nodes, such as sensors and environmental nodes, are in the scene graph but not affected by the transformation hierarchy.

Some nodes, such as Switch or LOD, contain a list of children, of which at most one is traversed during rendering. However, for the purposes of computing scene position, all children of these nodes are considered to be part of the transformation hierarchy, whether they are traversed during rendering or not. For instance, a Viewpoint node which is a child of a Switch whose whichChoice field is set to -1 (indicating that none of its children should be traversed during rendering) still uses the local coordinate space of the Switch to determine its position in the scene.

The transformation hierarchy shall be a directed acyclic graph; results are undefined if a node in the transformation hierarchy is its own ancestor.

Script nodes may have descendants. A descendant of a Script node is not part of the transformation hierarchy unless it is also the descendant of another node that is part of the transformation hierarchy or is a root node.

4.5.6 Standard units and coordinate system

ISO/IEC 14772 defines the unit of measure of the world coordinate system to be metres. All other coordinate systems are built from transformations based upon the world coordinate system. Table 4.3 lists standard units for ISO/IEC 14772.

Table 4.3 -- Standard units

Category         Unit
Linear distance  Metres
Angles           Radians
Time             Seconds
Colour space     RGB ([0.,1.], [0.,1.], [0.,1.])

ISO/IEC 14772 uses a Cartesian, right-handed, three-dimensional coordinate system. By default, the viewer is on the Z-axis looking down the -Z-axis toward the origin with +X to the right and +Y straight up. A modelling transformation (see 6.52, Transform, and 6.6, Billboard) or viewing transformation (see 6.53, Viewpoint) can be used to alter this default projection.

4.5.7 Run-time name scope

Each VRML browser defines a run-time name scope that contains all of the root nodes currently contained by the scene graph and all of the descendant nodes of the root nodes, with the exception of nodes hidden inside another name scope. Prototypes establish a name scope; therefore, nodes inside prototype instances are hidden from the parent name scope. However, top-level fields in the prototype instance are exposed to the parent scope. Thus, when it is desired to expose a node to the parent scope, the node simply needs to be placed in a field of the corresponding node type, or of a parent type.

Each Inline node and prototype instance also defines a run-time name scope, consisting of all of the root nodes of the file referred to by the Inline node or all of the root nodes of the prototype definition, restricted as above.

Nodes created dynamically (using the VRML SAI) are not part of any name scope, until they are added to the scene graph, at which point they become part of the same name scope of their parent node(s). A node may be part of more than one run-time name scope. A node shall be removed from a name scope when it is removed from the scene graph.

4.5.8  Top-level interface

Typically, a VRML file contains one or more top-level nodes. These nodes establish the root of the scene (if the given file is the root file) or the top-level nodes of a prototype instance (if the given file is the target of an EXTERNPROTO). These nodes are traversed during rendering, interaction, and other scene operations. But they have no visibility outside the scope of the file.

A VRML file may also contain top level field and function declarations. These properties constitute the external interface of the file. Agents outside the scope of the file have access to these fields and functions. If the given file is the target of an EXTERNPROTO, these fields and functions are the interface of nodes instantiated from the prototype. If the given file is the root file, these fields and functions are accessible outside the VRML runtime engine. For instance, in a page integration component, they become the interface to the page. If it is desirable to make one or more top level nodes of the file externally accessible, a field may be declared and initialized with the node value(s).

4.5.9  Behaviour graph

The event model of VRML allows the declaration of connections between fields and a model for the propagation of events along those connections. The behavior graph is the collection of these field connections. It can be changed dynamically by rerouting, adding or breaking connections. Events are injected into the system and propagate through the behavior graph in a well defined order.

Fields can only be routed to other fields of the same data type or of a type derived from the same data type. One extension of this rule is that a non-array field may be routed to an array field of the same or a derived type. This is because an array object is considered to be derived from its corresponding non-array object. For instance, if a node with a DEF name of A has a field b of type Vec3f and a node with a DEF name of C has a field d of type MFVec3f, the following is legal:

    ROUTE A.b TO C.d
When such an event is sent, the destination field receives an array containing a single value, which matches the source value.

4.5.10  Media graph

VRML formalizes the relationship among the various media nodes. This includes AmbientSound, AudioStream, MovieSurface and TimeBase. The AmbientSound node has a source field which is given a node of type AudioSourceNode. AudioStream is derived from AudioSourceNode and can therefore be placed in the source field. When the AudioStream plays its sound clip, it is sent to the AmbientSound node using the interfaces in AudioSourceNode where it is mixed with other sounds in the system. This generalization allows MovieSurface, which is also derived from AudioSourceNode, to be placed in the source field of AmbientSound as well. It also provides a mechanism for extending the capability of the media system. For instance, at higher levels of the audio profile there is an AudioMixer node. This is derived from AudioSourceNode and also has a source field which takes one or more instances of AudioSourceNode. This allows complex media graphs to be constructed:
    AmbientSound {
        source AudioMixer {
            source [
                AudioMixer {
                    source [
                        AudioStream {
                            url "sound1.wav"
                        }
                        AudioStream {
                            url "sound2.wav"
                        }
                    ]
                    intensity [ 0.5, 0.7 ]
                }
                AudioStream {
                    url "sound3.wav"
                }
            ]
            intensity [ 0.8, 0.9 ]
        }
    }
The above graph mixes the first two sounds with the given intensities, then mixes the result with a third sound and again adjusts the intensities. With the above structure it is possible to add filter nodes to add effects such as echo and bandpass filtering. See 2.[MPEG-4] for more information about advanced audio rendering.

Even though the syntactic structure of the media graph is identical to that of the transformation hierarchy, the semantics are quite different. In the case of audio, clips flow up the graph to their parents, which perform some sort of processing and then pass the result up the graph again until a "rendering" node, such as AmbientSound, is encountered. The media graph can be thought of as a temporal graph, whereas the transformation hierarchy is a spatial graph.

The surface related nodes have similar structure and when applied to movie surfaces have a similar temporal component. But the processing occurs on a sequence of image data rather than audio clips. For instance:

    Shape {
        appearance Appearance {
            texture Texture {
                surface MatteSurface {
                    surface1 MovieSurface {
                        url "movie.mpg"
                    }
                    surface2 ImageSurface {
                        url "gradient.png"
                    }
                    operator "REPLACE_ALPHA"
                }
            }
        }
        geometry IndexedFaceSet { ... }
    }
All surface producing nodes are derived from SurfaceNode and can therefore be placed in any of the given surface fields. In the above example, the MovieSurface is combined with the static alpha gradient, and the result is texture mapped, frame by frame, to the given IndexedFaceSet. Again, new surface producing nodes can be created and added to the system to produce new image processing effects.


4.6 VRML and the World Wide Web

4.6.1 File extension and MIME types

The file extension for VRML files is .wrl (for world). File extensions for specific encodings may also be defined by that particular encoding. In these cases, the file shall behave as though .wrl were used.

The official MIME type for VRML files is defined as:

    model/vrml

where the MIME major type for 3D data descriptions is model, and the minor type for VRML documents is vrml. Additional MIME types may be specified for particular encodings. These shall be defined in the specification for that particular encoding. Such MIME types shall not allow functionality different from that specified in this part of ISO/IEC 14772.

For compatibility with earlier versions of VRML, the following MIME type shall also be supported:

    x-world/x-vrml

where the MIME major type is x-world, and the minor type for VRML documents is x-vrml.

See C.[MIME] for details.

4.6.2 URLs

A URL (Uniform Resource Locator), described in [URL], specifies a file located on a particular server and accessed through a specified protocol (e.g., http). In this part of ISO/IEC 14772, the upper-case term URL refers to a Uniform Resource Locator, while the italicized lower-case version url refers to a field which may contain URLs or in-line encoded data.

All url fields are of type MFString. The strings in these fields indicate multiple locations to search for data in decreasing order of preference. If the browser cannot locate or interpret the data specified by the first location, it may try the second and subsequent locations in order until a URL containing interpretable data is encountered. However, VRML browsers are only required to interpret a single URL. If no interpretable URLs are located, the node type defines the resultant default behaviour.
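For example, the following fragment (informative; the addresses and file names are hypothetical) first looks for a local texture and falls back to a remote copy:

    ImageSurface {
        url [ "textures/wood.png",
              "http://www.example.com/textures/wood.png" ]
    }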

All web addresses used in VRML are actually URNs (Uniform Resource Names), which are a superset of the URL concept. A URN allows an abstract resolution mechanism to be invoked to locate a resource. This allows a resource to be located on the local machine, or a platform-dependent resource to be located using the URN along with platform-specific identifiers. For instance, the following URN constructs can be used:

    urn:vrml:web3d.org:textures/wood/mahogany
    urn:vrml:mycompany.com:nodes/CustomNodeSet#ExplosionEffect
The first URN would use the vrml resolver, which locates a resource using the web3d.org resolution base. The user may have previously downloaded a set of standard image assets from the WEB3D.ORG site, which were placed in a location on the local machine indexed by the web3d.org base. The resolver would attempt to find the resource in the local file system, starting at this base. If found, an acceptable image suffix would be appended to the mahogany keyword and an attempt would be made to load the given texture. If no acceptable image formats were available, the resolver might go to the WEB3D.ORG site to download an acceptable asset.

The second URN would use the same resolver, but would locate resources based at the possibly proprietary mycompany.com resolution base. This base might contain resources custom to a particular browser. This URN refers to custom node implementations for this browser. So the path is searched for an executable file appropriate for the current platform. If found, the executable is loaded and added to the browser node resources and an attempt is made to find the ExplosionEffect node. If found, an instance of such a node could be created. Otherwise, the resolver might go to the MYCOMPANY.COM site to load an appropriate implementation (after the appropriate security precautions have been taken). Or a platform-independent implementation (perhaps written in Java) might be loaded.

A URL string in a url field can be either an absolute web address or an address relative to the owner of the field. Rules for resolving relative URLs are given in 4.6.3. If no interpretable URLs are located, the node type defines the resultant default behaviour; this typically includes the sending of some sort of failure event to allow for handling of this exceptional condition.

More general information on URLs is described in 2.[URL].

4.6.3 Relative URLs

Relative URLs are handled as described in 2.[RURL]. The base document for EXTERNPROTO statements or nodes that contain URL fields is:
  1. The VRML file in which the prototype is instantiated, if the statement is part of a prototype definition. 
  2. The file containing the script code, if the statement is part of a string passed to the createVrmlFromURL() or createVrmlFromString() browser calls of the VRML SAI.
  3. Otherwise, the VRML file from which the statement is read, in which case the RURL information provides the data itself.
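As an informative sketch, if a file loaded from the hypothetical address http://www.example.com/worlds/main.wrl contains

    Inline { url "rooms/kitchen.wrl" }

the relative URL resolves against the base document to http://www.example.com/worlds/rooms/kitchen.wrl.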

4.6.4 Scripting language protocols

The Script node's url field may also support custom protocols for the various scripting languages. For example, a script url prefixed with javascript: shall contain ECMAScript source, with line terminators allowed in the string. The details of each language protocol are defined in the annex of ISO/IEC 14772-2 which defines the binding for each language. Browsers which conform to a profile which supports scripting are not required to support both the Java and ECMAScript scripting languages. Browsers shall adhere to the protocol defined in the corresponding annex of ISO/IEC 14772-2 for any scripting language which is supported. The following example illustrates the mixing of custom protocols and standard protocols in a single url field (order of precedence determines priority):
    #VRML V3.0 utf8 
    Script {
        url [ "javascript: ...",           # custom protocol ECMAScript
              "http://bar.com/foo.js",     # std protocol ECMAScript
              "http://bar.com/foo.class" ] # std protocol Java platform bytecode
    }
In the example above, the "..." represents in-line ECMAScript source code.


4.7 Node semantics

4.7.1 Introduction

VRML has a single component hierarchy. Components representing lightweight concepts such as data storage and operations on data of that type are called fields and are derived from the Field interface. Components representing more complete spatial or temporal processing concepts are called nodes and are derived from the Node interface. All components can contain named and typed data properties holding data values for the components. For components derived from Node, these are also referred to as fields, for compatibility with VRML nomenclature. A data property can also contain a reference to a node by using the NodeRef component. This is a special component which contains not only a reference to the actual node value, but also the valid type for that field.

A data property can contain either a single value of the given type or an array of such types. Throughout this document, a field containing a single value is said to be of the given type (e.g., field a is of type Vec3f), while a field containing an array has its type prefixed by the keyword MF (e.g., field b is of type MFVec3f).

Each component has the following common characteristics:

  1. A type name. Examples include Vec3f, Color, Group, Float, AmbientSound, and SpotLight.
  2. Zero or more properties that define how each component differs from other components of the same type. A property is more commonly referred to as a field and has a name and an interface type. Property values are stored in the VRML file along with the nodes, and encode the state of the virtual world. 
  3. An implementation. The implementation of each object defines how it reacts to changes in its property values, what other property values it alters as a result of these changes, and how it affects the state of the runtime environment. This part of ISO/IEC 14772 defines the semantics of built-in nodes (i.e., nodes with implementations that are provided by the VRML browser).

A component derived from Node has the following additional characteristics:

  1. A set of events that it can receive and send. Each node may receive events to its fields which will result in some change to the node's state. Each node may also generate events from its fields to report changes in the node's state. Events generated from one node can be connected to fields of other nodes to propagate these changes. This is done using the ROUTE statement in the file or through an SAI service reference.
  2. A name. Nodes can be named using either the DEF statement in the file or through an SAI service reference. This is used by other statements to reference a specific instantiation of a node. It is also used to locate a specific named node within the scene hierarchy (see the sketch after this list).
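
A minimal sketch of both characteristics (standard VRML syntax; the node and field names are illustrative):
    DEF PRESS TouchSensor { }           # a named sensor node
    DEF LAMP PointLight { on FALSE }    # a named light source, initially off
    ROUTE PRESS.isOver TO LAMP.on       # events from PRESS change LAMP's state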

4.7.2 DEF/USE semantics

4.7.2.1 Instancing semantics

A node may be referenced multiple times by a scene graph contained in a VRML browser. This does not create copies of the node. Instead, the same node is re-inserted into the scene graph, resulting in the node having multiple parents. Using an instance of a node multiple times is called node instancing.
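
For example, the following minimal sketch gives one Shape node two parents:
    DEF LEAF Shape { geometry Sphere { } }
    Transform { translation  2 0 0 children [ USE LEAF ] }
    Transform { translation -2 0 0 children [ USE LEAF ] }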

4.7.2.2 Naming semantics

Node names are limited in scope to a single VRML file, prototype definition, or string submitted to the createVrmlFromString SAI service. A node name defined in a file, outside of any PROTO definition, can be referenced only by USE or ROUTE statements within that same file and also outside any PROTO definitions. A node name defined within a given PROTO can be referenced only by USE or ROUTE statements within that same PROTO. Given a node named "NewNode" (i.e., DEF NewNode), any "USE NewNode" statements in SFNode or MFNode fields inside NewNode's scope refer to NewNode (see 4.5.5, Transformation hierarchy, for restrictions on self-referential nodes).

If multiple nodes are given the same name, each USE statement refers to the closest node with the given name preceding it in either the VRML file or prototype definition.

4.7.3 Shapes, geometry and appearance

4.7.3.1 Shape node

The Shape node associates a geometry node with nodes that define that geometry's appearance. Shape nodes must be part of the transformation hierarchy to have any visible result, and the transformation hierarchy must contain Shape nodes for any geometry to be visible (the only nodes that render visible results are Shape nodes and the Background node). A Shape node contains exactly one geometry node in its geometry field, which is of type GeometryNode. The following node types are geometry nodes.

4.7.3.2 Geometric property nodes

Several geometry nodes contain Coordinate, Color, Normal, and TextureCoordinate as geometric property nodes. The geometric property nodes are defined as individual nodes so that instancing and sharing is possible between different geometry nodes.

4.7.3.3 Appearance nodes

Shape nodes may specify an Appearance node that describes the appearance properties (material and texture) to be applied to the Shape's geometry. Nodes of the following types may be specified in the material field of the Appearance node:

This set may be extended by creating new nodes subclassed from the Material class (see Subclassing).

Nodes of the following types may be specified by the texture field of the Appearance node:

This set may be extended by creating new nodes subclassed from the abstract Texture base class (see Subclassing, Abstract Node Classes).

Nodes of the following types may be specified in the textureTransform field of the Appearance node:

The interaction between the appearance properties and properties specific to geometry nodes is described in the clause specifying the lighting model.

4.7.3.4 Common geometry fields

Certain geometry nodes have several fields that provide information about the rendering of the geometry; such nodes are derived from FaceSetNode. These fields specify the vertex ordering, whether the shape is solid, whether the shape contains convex faces, and at what angle a crease appears between faces; they are named ccw, solid, convex, and creaseAngle, respectively.

The ccw field defines the ordering of the vertex coordinates of the geometry with respect to user-given or automatically generated normal vectors used in the lighting model equations. If ccw is TRUE, the normals shall follow the right hand rule; the orientation of each normal with respect to the vertices (taken in order) shall be such that the vertices appear to be oriented in a counterclockwise order when the vertices are viewed (in the local coordinate system of the Shape) from the opposite direction as the normal. If ccw is FALSE, the normals shall be oriented in the opposite direction. If normals are not generated but are supplied using a Normal node, and the orientation of the normals does not match the setting of the ccw field, results are undefined.

The solid field determines whether one or both sides of each polygon shall be displayed. If solid is FALSE, each polygon shall be visible regardless of the viewing direction (i.e., no backface culling shall be done, and two sided lighting shall be performed to illuminate both sides of lit surfaces). If solid is TRUE, the visibility of each polygon shall be determined as follows: Let V be the position of the viewer in the local coordinate system of the geometry. Let N be the geometric normal vector of the polygon, and let P be any point (besides the local origin) in the plane defined by the polygon's vertices. Then if (V dot N) - (N dot P) is greater than zero, the polygon shall be visible; if it is less than or equal to zero, the polygon shall be invisible (back face culled).

The convex field indicates whether all polygons in the shape are convex (TRUE). A polygon is convex if it is planar, does not intersect itself, and all of the interior angles at its vertices are less than 180 degrees. Non-planar and self-intersecting polygons may produce undefined results even if the convex field is FALSE.

The creaseAngle field affects how default normals are generated. If the angle between the geometric normals of two adjacent faces is less than the crease angle, normals shall be calculated so that the faces are smooth shaded across the edge; otherwise, normals shall be calculated so that a lighting discontinuity across the edge is produced. For example, a crease angle of 0.5 radians means that an edge between two adjacent polygonal faces will be smooth shaded if the geometric normals of the two faces form an angle that is less than 0.5 radians. Otherwise, the faces will appear faceted. Crease angles shall be greater than or equal to 0.0.
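
For example, a minimal sketch (VRML97 IndexedFaceSet syntax) that sets all four fields explicitly for a single square face:
    Shape {
        geometry IndexedFaceSet {
            coord Coordinate { point [ 0 0 0, 1 0 0, 1 1 0, 0 1 0 ] }
            coordIndex  [ 0 1 2 3 -1 ]
            ccw         TRUE    # vertices are counterclockwise seen from the +Z side
            solid       FALSE   # show both sides; no backface culling
            convex      TRUE    # the single square face is convex
            creaseAngle 0.5     # smooth-shade edges whose faces meet at < 0.5 rad
        }
    }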

4.7.4 Bounding boxes

Several of the nodes include a bounding box specification composed of two fields, bboxSize and bboxCenter. A bounding box is a rectangular parallelepiped of dimension bboxSize centred on the location bboxCenter in the local coordinate system. This is typically used by grouping nodes to provide a hint to the browser on the group's approximate size for culling optimizations. The default size for bounding boxes (-1, -1, -1) indicates that the user did not specify the bounding box and the effect shall be as if the bounding box were infinitely large. A bboxSize value of (0, 0, 0) is valid and represents a point in space (i.e., an infinitely small box). Specified bboxSize field values shall be >= 0.0 or equal to (-1, -1, -1). The bboxCenter field specifies a position offset from the local coordinate system.

The bboxCenter and bboxSize fields may be used to specify a maximum possible bounding box for the objects inside a grouping node (e.g., Transform). These are used as hints to optimize certain operations such as determining whether or not the group needs to be drawn. The bounding box shall be large enough at all times to enclose the union of the group's children's bounding boxes; it shall not include any transformations performed by the group itself (i.e., the bounding box is defined in the local coordinate system of the children). Results are undefined if the specified bounding box is smaller than the true bounding box of the group.
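
For example, a minimal sketch of a grouping node that declares a 2 x 2 x 2 bounding box centred one metre above its local origin as a culling hint:
    Transform {
        bboxCenter 0 1 0   # centre, in the children's coordinate system
        bboxSize   2 2 2   # dimensions; (-1, -1, -1) would mean "unspecified"
        children   [ ... ]
    }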

4.7.5 Grouping and children nodes

Grouping nodes have a field that contains a list of children nodes. Each grouping node defines a coordinate space for its children. This coordinate space is relative to the coordinate space of the node of which the group node is a child. Such a node is called a parent node. This means that transformations accumulate down the scene graph hierarchy.

The following node types are grouping nodes:

The following node types are children nodes:

The following node types are not valid as children nodes:

All grouping nodes except Inline, LOD, and Switch also have addChildren and removeChildren eventIn definitions. The addChildren event appends nodes to the grouping node's children field. Any nodes passed to the addChildren event that are already in the group's children list are ignored. For example, if the children field contains the nodes Q, L and S (in order) and the group receives an addChildren eventIn containing (in order) nodes A, L, and Z, the result is a children field containing (in order) nodes Q, L, S, A, and Z.

The removeChildren event removes nodes from the grouping node's children field. Any nodes in the removeChildren event that are not in the grouping node's children list are ignored. If the children field contains the nodes Q, L, S, A and Z and it receives a removeChildren eventIn containing nodes A, L, and Z, the result is Q, S.
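
The two examples above can be written as the following sketch (the named Shape nodes stand in for Q, L, and S):
    DEF G Group {
        children [ DEF Q Shape { } DEF L Shape { } DEF S Shape { } ]
    }
    # Sending addChildren [ A, L, Z ] to G yields children [ Q, L, S, A, Z ]
    # Sending removeChildren [ A, L, Z ] to that result yields children [ Q, S ]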


All grouping nodes must have a children field of type NodeArrayField. Adding a node to this field will add that node to the grouping node's set of children. Adding any node to a grouping node's children field that is already in that group's child list is illegal. Adding any node to a grouping node's children field that is an ancestor of that grouping node is illegal. 

Note that a variety of node types reference other node types through fields. Some of these are parent-child relationships, while others are not (there are node-specific semantics). Table 4.3 lists all node types that reference other nodes through fields in VRML. New nodes types that reference other nodes may be defined using the extension mechanisms. The valid set of nodes types that may be referenced in fields of the node types below may similarly be extended.

Note that, for each valid node type, any node that is subclassed from that node is a legal value for that field. For example, the geometry field of the Shape node may be filled with an IndexedFaceSet, IndexedLineSet, PointSet, or any other subclass of Geometry that is included in a given profile.

Table 4.3 -- Nodes with SFNode or MFNode fields

Node Type        Field         Valid Node Types for Field
Anchor           children      Valid children nodes
Appearance       material      Material
                 texture       ImageTexture, MovieTexture, PixelTexture
Billboard        children      Valid children nodes
Collision        children      Valid children nodes
ElevationGrid    color         Color
                 normal        Normal
                 texCoord      TextureCoordinate
Group            children      Valid children nodes
IndexedFaceSet   color         Color
                 coord         Coordinate
                 normal        Normal
                 texCoord      TextureCoordinate
IndexedLineSet   color         Color
                 coord         Coordinate
LOD              level         Valid children nodes
Shape            appearance    Appearance
                 geometry      Box, Cone, Cylinder, ElevationGrid, Extrusion,
                               IndexedFaceSet, IndexedLineSet, PointSet, Sphere, Text
Sound            source        AudioClip, MovieTexture
Switch           choice        Valid children nodes
Text             fontStyle     FontStyle
Transform        children      Valid children nodes

4.7.6 Light sources

4.7.6.1  Overview

Shape nodes are illuminated by the sum of all of the lights in the world that affect them. This includes the contribution of both the direct and ambient illumination from light sources. Ambient illumination results from the scattering and reflection of light originally emitted directly by light sources. The amount of ambient light is associated with the individual lights in the scene. This is a gross approximation to how ambient reflection actually occurs in nature.

Any node used as a source of illumination is derived from LightNode. All light sources contain an intensity, a color, and an ambientIntensity field. The intensity field specifies the brightness of the direct emission from the light, and the ambientIntensity specifies the intensity of the ambient emission from the light. Light intensity may range from 0.0 (no light emission) to 1.0 (full intensity). The color field specifies the spectral colour properties of both the direct and ambient light emission as an RGB value. The on field specifies whether the light is enabled or disabled. If the value is FALSE, then the light is disabled and will not affect any nodes in the scene. If the value is TRUE, then the light will affect other nodes according to the following scoping rules.

4.7.6.2 Scoping of lights

The affectedGroups field controls which nodes in the scene are affected by the light. By default, the field is empty, meaning that the light will affect all nodes below the light node's parent group, but no other nodes. If the affectedGroups field is not null, then it will contain a list of names of Group nodes in the scene. In this case, the light will affect all nodes contained below one of the Group nodes whose name matches an entry in the affectedGroups field. The light will not affect any other nodes. If a value in the affectedGroups field contains a name for which no matching Group node can be found, then that field value will not cause any lighting to occur; other values will still cause lighting of other groups, if matches can be found.

There is a special value for the affectedGroups field, "#Root", which specifies that the light shall be scoped to affect all nodes in the scene.
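
For example, a minimal sketch (using the affectedGroups field as defined above; node names are illustrative) that scopes a PointLight to a single named group:
    DEF Room Group {
        children [ ... ]              # geometry to be lit
    }
    PointLight {
        affectedGroups [ "Room" ]     # light only nodes below the group named Room
        # affectedGroups [ "#Root" ]  # would instead light every node in the scene
    }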

4.7.6.3  Light source nodes

The following node types are light source nodes:


PointLight and SpotLight illuminate all objects in the world that fall within their volume of lighting influence regardless of location within the transformation hierarchy. PointLight defines this volume of influence as a sphere centred at the light (defined by a radius). SpotLight defines the volume of influence as a solid angle defined by a radius and a cutoff angle. DirectionalLight nodes illuminate only the objects descended from the light's parent grouping node, including any descendant children of the parent grouping nodes.

4.7.7 Sensor nodes

4.7.7.1 Introduction to sensors

Sensors are nodes which emit events based on some event which occurs in the environment. This event could be the passage of time, the activation of some user input device, or the alteration of other elements of the runtime environment such as the user's viewpoint. Some sensors, such as those generating time or keyboard events, operate independently of their position in the hierarchy. Others, such as those sensing the picking of objects in the scene, are sensitive only to interaction with their peer nodes. Still others, such as those detecting the movement of the camera through the scene, are dependent on their parent transformation for placement.

The following node types are sensor nodes:

Sensors are children nodes in the hierarchy and therefore may be parented by grouping nodes as described in 4.7.5, Grouping and children nodes.

Each type of sensor defines when an event is generated. The state of the scene graph after several sensors have generated events shall be as if each event is processed separately, in order. If sensors generate events at the same time, the state of the scene graph will be undefined if the results depend on the ordering of the events.

It is possible to create dependencies between various types of sensors. For example, a TouchSensor may result in a change to a VisibilitySensor node's transformation, which in turn may cause the VisibilitySensor node's visibility status to change.

The following two sections classify sensors into two categories: environmental sensors and pointing-device sensors.

4.7.7.2 Environmental sensors

The following node types are environmental sensors:

The ProximitySensor detects when the user navigates into a specified region in the world. The ProximitySensor itself is not visible. The TimeSensor is a clock that has no geometry or location associated with it; it is used to start and stop time-based nodes such as interpolators. The VisibilitySensor detects when a specific part of the world becomes visible to the user. The Collision grouping node detects when the user collides with objects in the virtual world. Proximity, time, collision, and visibility sensors are each processed independently of whether others exist or overlap.

When environmental sensors are inserted into the transformation hierarchy and before the presentation is updated (i.e., read from file or created by a script), they shall generate events indicating any conditions which the sensor is intended to detect (see 4.11.3, Execution model). The conditions for individual sensor types to generate these initial events are defined in the individual node specifications in 6, Node reference.

See 4.12, Time, and 6.50, TimeSensor, for more information on the TimeSensor node.

4.7.7.3 Pointing-device sensors

Pointing-device sensors detect user pointing events such as the user clicking on a piece of geometry (i.e., TouchSensor). The following node types are pointing-device sensors:

A pointing-device sensor is activated when the user locates the pointing device over geometry that is influenced by that specific pointing-device sensor. Pointing-device sensors have influence over all geometry that is descended from the sensor's parent groups. In the case of the Anchor node, the Anchor node itself is considered to be the parent group. Typically, the pointing-device sensor is a sibling to the geometry that it influences. In other cases, the sensor is a sibling to groups which contain geometry (i.e., are influenced by the pointing-device sensor).

The appearance properties of the geometry do not affect activation of the sensor. In particular, transparent materials or textures shall be treated as opaque with respect to activation of pointing-device sensors.

For a given user activation, the lowest enabled pointing-device sensor in the hierarchy is activated. All other pointing-device sensors above the lowest enabled pointing-device sensor are ignored. The hierarchy is defined by the geometry node over which the pointing-device sensor is located and the entire hierarchy upward. If there are multiple pointing-device sensors tied for lowest, each of these is activated simultaneously and independently, possibly resulting in multiple sensors activating and generating output simultaneously. This feature allows combinations of pointing-device sensors (e.g., TouchSensor and PlaneSensor). If a pointing-device sensor appears in the transformation hierarchy multiple times (DEF/USE), it shall be tested for activation in all of the coordinate systems in which it appears.

If a pointing-device sensor is not enabled when the pointing-device button is activated, it will not generate events related to the pointing device until after the pointing device is deactivated and the sensor is enabled (i.e., enabling a sensor in the middle of dragging does not result in the sensor activating immediately).

4.7.7.4 Drag sensors

Drag sensors are a subset of pointing-device sensors. There are three types of drag sensors: CylinderSensor, PlaneSensor, and SphereSensor. Drag sensors have two eventOuts in common, trackPoint_changed and <value>_changed. These eventOuts send events for each movement of the activated pointing device according to their "virtual geometry" (e.g., cylinder for CylinderSensor). The trackPoint_changed eventOut sends the intersection point of the bearing with the drag sensor's virtual geometry. The <value>_changed eventOut sends the sum of the relative change since activation plus the sensor's offset field. The type and name of <value>_changed depends on the drag sensor type: rotation_changed for CylinderSensor, translation_changed for PlaneSensor, and rotation_changed for SphereSensor.

To simplify the application of these sensors, each node has an offset and an autoOffset exposed field. When the sensor generates events as a response to the activated pointing device motion, <value>_changed sends the sum of the relative change since the initial activation plus the offset field value. If autoOffset is TRUE when the pointing-device is deactivated, the offset field is set to the sensor's last <value>_changed value and offset sends an offset_changed eventOut. This enables subsequent grabbing operations to accumulate the changes. If autoOffset is FALSE, the sensor does not set the offset field value at deactivation (or any other time).
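
A minimal sketch of a draggable box (VRML97 node types); because autoOffset is TRUE, a new drag resumes from where the previous one ended:
    Group {
        children [
            DEF PS PlaneSensor { autoOffset TRUE }
            DEF T Transform { children [ Shape { geometry Box { } } ] }
        ]
    }
    ROUTE PS.translation_changed TO T.set_translation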

4.7.7.5 Activating and manipulating sensors

The pointing device controls a pointer in the virtual world. While activated by the pointing device, a sensor will generate events as the pointer moves. Typically the pointing device may be categorized as either 2D (e.g., conventional mouse) or 3D (e.g., wand). It is suggested that the pointer controlled by a 2D device is mapped onto a plane a fixed distance from the viewer and perpendicular to the line of sight. The mapping of a 3D device may describe a 1:1 relationship between movement of the pointing device and movement of the pointer.

The position of the pointer defines a bearing which is used to determine which geometry is being indicated. When implementing a 2D pointing device it is suggested that the bearing is defined by the vector from the viewer position through the location of the pointer. When implementing a 3D pointing device it is suggested that the bearing is defined by extending a vector from the current position of the pointer in the direction indicated by the pointer.

In all cases the pointer is considered to be indicating a specific geometry when that geometry is intersected by the bearing. If the bearing intersects multiple sensors' geometries, only the sensor nearest to the pointer will be eligible for activation.

4.7.8 Interpolator nodes

4.7.8.1 Abstract interpolators

Interpolators are designed to allow content authors to easily interpolate between values in a VRML scene. The following abstract definition of an Interpolator is provided upon which all types of interpolators can be built:

Interpolator {
  FloatField            fraction        = 0;    // Usage = NORMALIZED_FLOAT
  NodeField             timeSensor      = NULL; // Usage = TIME_SENSOR
  NodeArrayField        toNode          = NULL;
  StringArrayField      toField         = [];
}
The fraction field specifies the current fraction (from 0 to 1) of the interpolator.

The timeSensor field specifies a timeSensor for the interpolator. If the timeSensor field is NULL the interpolator fraction shall be set explicitly for the interpolator to output a new value. If the timeSensor field is non-NULL the field shall contain a TimeSensor. If the TimeSensor is currently active, the interpolator shall use the TimeSensor's getFraction method to update the interpolator's fraction field. In this case, setting the fraction field explicitly will have no effect.

The toNode NodeArrayField specifies an array of Nodes that this Interpolator should affect.

The toField StringArrayField specifies an array of field names. There shall be exactly as many field names in the toField field as there are Nodes in the toNode field. For each position n, toField[n] shall name a legal field of the Node at toNode[n].

Every Node that subclasses Interpolator must define a field called output. This field will store the most recently calculated output. Setting the value of this field will have no effect on the interpolator.

This part of ISO/IEC 14772 includes five interpolators, all of which interpolate linearly: ColorInterpolator, CoordinateInterpolator, OrientationInterpolator, PositionInterpolator, and ScalarInterpolator. A discussion of linear interpolators follows.

4.7.8.2  Linear Interpolators

The specified VRML interpolator nodes are designed for linear keyframed animation. Each of these nodes defines a piecewise-linear function, f(t), on the interval (-infinity, +infinity). The piecewise-linear function is defined by n values of t, called key, and the n corresponding values of f(t), called keyValue. The keys shall be monotonically non-decreasing, otherwise the results are undefined. The keys are not restricted to any interval.

Each of these nodes evaluates f(t) given any value of t (via the fraction field) as follows: Let the n keys t0, t1, t2, ..., tn-1 partition the domain (-infinity, +infinity) into the n+1 subintervals given by (-infinity, t0), [t0, t1), [t1, t2), ... , [tn-1, +infinity). Also, let the n values v0, v1, v2, ..., vn-1 be the values of f(t) at the associated key values. The piecewise-linear interpolating function, f(t), is defined to be

     f(t) = v0, if t <= t0,
          = vn-1, if t >= tn-1, 
          = linterp(t, vi, vi+1), if ti <= t <= ti+1

     where linterp(t,x,y) is the linear interpolant, i belongs to {0,1,..., n-2}.
The third conditional value of f(t) allows the defining of multiple values for a single key, (i.e., limits from both the left and right at a discontinuity in f(t)). The first specified value is used as the limit of f(t) from the left, and the last specified value is used as the limit of f(t) from the right. The value of f(t) at a multiply defined key is indeterminate, but should be one of the associated limit values.
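
As a worked example, consider a ScalarInterpolator with three keys:
    ScalarInterpolator {
        key      [ 0 0.5 1 ]   # t0, t1, t2
        keyValue [ 0 1 0 ]     # f(t0), f(t1), f(t2)
    }
    # f(0.25) = linterp(0.25, 0, 1) = 0.5   (halfway between t0 and t1)
    # f(-1)   = 0                           (clamped to v0 below t0)
    # f(2)    = 0                           (clamped to v2 above t2)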

The following node types are interpolator nodes, each based on the type of value that is interpolated:

All specified VRML interpolator nodes share a common set of fields and semantics:

  FloatField            fraction        = 0;    // Usage = NORMALIZED_FLOAT
  FloatArrayField       key             = [];   // Usage = NORMALIZED_FLOAT_ARRAY
  FloatArrayField       keyValue        = [];
  <type>Field           output          = [];
  NodeField             timeSensor      = NULL; // Usage = TIME_SENSOR
  NodeArrayField        toNode          = NULL;
  StringArrayField      toField         = [];
The usage of the keyValue and output fields is dependent on the type of the interpolator (e.g., the PositionInterpolator's keyValue field usage is COORD3_ARRAY and its output usage is COORD3).

Each time the interpolator node is rendered and each time a field of the interpolator that affects its output value is changed, the output field must be updated, and the output value sent to each field in the toField field on the corresponding Node in the toNode field. If the usage of the field receiving the new value is different than the usage of the output field, results are undefined. 
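
For example, a minimal sketch (assuming the abstract Interpolator fields above) that sends each new output to a Transform's translation field:
    DEF T Transform { children [ Shape { geometry Box { } } ] }
    PositionInterpolator {
        key        [ 0 1 ]
        keyValue   [ 0 0 0, 0 2 0 ]          # usage COORD3_ARRAY
        timeSensor TimeSensor { loop true }  # drives the fraction field while active
        toNode     [ USE T ]
        toField    [ "translation" ]         # each output value is sent to T.translation
    }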

A Group node's hidden field does not affect the behaviour of interpolator nodes below it. Interpolators will always produce new output values during rendering.

The set_fraction eventIn receives an SFFloat event and causes the interpolator function to evaluate, resulting in a value_changed eventOut with the same timestamp as the set_fraction event.

ColorInterpolator, OrientationInterpolator, PositionInterpolator, and ScalarInterpolator output a single-value field to value_changed. Each value in the keyValue field corresponds in order to the parameter value in the key field. Results are undefined if the number of values in the key field of an interpolator is not the same as the number of values in the keyValue field.

CoordinateInterpolator sends multiple-value results to value_changed. In this case, the keyValue field is an n x m array of values, where n is the number of values in the key field and m is the number of values at each keyframe. Each m values in the keyValue field correspond, in order, to a parameter value in the key field. Each value_changed event shall contain m interpolated values. Results are undefined if the number of values in the keyValue field divided by the number of values in the key field is not a positive integer.

If an interpolator node's value eventOut is read before it receives any inputs, keyValue[0] is returned if keyValue is not empty. If keyValue is empty (i.e., [ ]), the initial value for the eventOut type is returned (e.g., (0, 0, 0) for SFVec3f); see 5, Field and event reference, for initial event values.

The location of an interpolator node in the transformation hierarchy has no effect on its operation. For example, if a parent of an interpolator node is a Switch node with whichChoice set to -1 (i.e., ignore its children), the interpolator continues to operate as specified (receives and sends events).

4.7.9 Time-dependent nodes

[Since there are three vastly different proposals for this section, the original VRML 97 text is kept. Once the issues concerning time-dependent nodes are resolved, this section can be modified appropriately.]

AudioClip, MovieTexture, and TimeSensor are time-dependent nodes that activate and deactivate themselves at specified times. Each of these nodes contains the exposedFields: startTime, stopTime, and loop, and the eventOut: isActive. The values of the exposedFields are used to determine when the node becomes active or inactive. Also, under certain conditions, these nodes ignore events to some of their exposedFields. A node ignores an eventIn by not accepting the new value and not generating an eventOut_changed event. In this subclause, an abstract time-dependent node can be any one of AudioClip, MovieTexture, or TimeSensor.

Time-dependent nodes can execute for 0 or more cycles. A cycle is defined by field data within the node. If, at the end of a cycle, the value of loop is FALSE, execution is terminated (see below for events at termination). Conversely, if loop is TRUE at the end of a cycle, a time-dependent node continues execution into the next cycle. A time-dependent node with loop TRUE at the end of every cycle continues cycling forever if startTime >= stopTime, or until stopTime if  startTime < stopTime.

A time-dependent node generates an isActive TRUE event when it becomes active and generates an isActive FALSE event when it becomes inactive. These are the only times at which an isActive event is generated. In particular, isActive events are not sent at each tick of a simulation.

A time-dependent node is inactive until its startTime is reached. When time now becomes greater than or equal to startTime, an isActive TRUE event is generated and the time-dependent node becomes active (now refers to the time at which the browser is simulating and displaying the virtual world). When a time-dependent node is read from a VRML file and the ROUTEs specified within the VRML file have been established, the node should determine if it is active and, if so, generate an isActive TRUE event and begin generating any other necessary events. However, if a node would have become inactive at any time before the reading of the VRML file, no events are generated upon the completion of the read.

An active time-dependent node will become inactive when stopTime is reached if stopTime > startTime. The value of stopTime is ignored if stopTime <= startTime. Also, an active time-dependent node will become inactive at the end of the current cycle if loop is FALSE. If an active time-dependent node receives a set_loop FALSE event, execution continues until the end of the current cycle or until stopTime (if stopTime > startTime), whichever occurs first. The termination at the end of cycle can be overridden by a subsequent set_loop TRUE event.

Any set_startTime events to an active time-dependent node are ignored. Any set_stopTime event where stopTime <= startTime sent to an active time-dependent node is also ignored. A set_stopTime event where startTime < stopTime <= now sent to an active time-dependent node results in events being generated as if stopTime has just been reached. That is, final events, including an isActive FALSE, are generated and the node becomes inactive. The stopTime_changed event will have the set_stopTime value. Other final events are node-dependent (cf. TimeSensor).

A time-dependent node may be restarted while it is active by sending a set_stopTime event equal to the current time (which will cause the node to become inactive) and a set_startTime event, setting it to the current time or any time in the future. These events will have the same time stamp and should be processed as set_stopTime, then set_startTime to produce the correct behaviour.
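
A minimal sketch of this restart pattern in VRML97 syntax (directOutput is required because the script writes another node's fields):
    DEF TS TimeSensor { cycleInterval 5 }
    DEF RESTART Script {
        eventIn SFTime touchTime
        field   SFNode clock USE TS
        directOutput TRUE
        url "javascript:
            function touchTime(value) {
                clock.stopTime  = value;  // deactivate at the current time
                clock.startTime = value;  // reactivate; same timestamp, stop processed first
            }"
    }
    DEF TOUCH TouchSensor { }
    ROUTE TOUCH.touchTime TO RESTART.touchTime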

The default values for each of the time-dependent nodes are specified such that any node with default values is already inactive (and, therefore, will generate no events upon loading). A time-dependent node can be defined such that it will be active upon reading by specifying loop TRUE. This use of a non-terminating time-dependent node should be used with caution since it incurs continuous overhead on the simulation.

Figure 4.2 illustrates the behavior of several common cases of time-dependent nodes. In each case, the initial conditions of startTime, stopTime, loop, and the time-dependent node's cycle interval are labelled, the red region denotes the time period during which the time-dependent node is active, the arrows represent eventIns received by and eventOuts sent by the time-dependent node, and the horizontal axis represents time.


Figure 4.2 -- Examples of time-dependent node execution


4.7.10 Bindable children nodes

[This clause has not been changed inasmuch as there are at least five proposals extant. When this issue is resolved, this subclause will be changed as needed.]

The Background, Fog, NavigationInfo, and Viewpoint nodes have the unique behaviour that only one of each type can be bound (i.e., affecting the user's experience) at any instant in time. The browser shall maintain an independent, separate stack for each type of bindable node. Each of these nodes includes a set_bind eventIn and an isBound eventOut. The set_bind eventIn is used to move a given node to and from its respective top of stack. A TRUE value sent to the set_bind eventIn moves the node to the top of the stack; sending a FALSE value removes it from the stack. The isBound event is output when a given node is:

  1. moved to the top of the stack;
  2. removed from the top of the stack;
  3. pushed down from the top of the stack by another node being placed on top.

That is, isBound events are sent when a given node becomes, or ceases to be, the active node. The node at the top of the stack (the most recently bound node) is the active node for its type and is used by the browser to set the world state. If the stack is empty (i.e., either the VRML file has no bindable nodes for a given type or the stack has been popped until empty), the default field values for that node type are used to set world state. The results are undefined if a multiply instanced (DEF/USE) bindable node is bound.

The following rules describe the behaviour of the binding stack for a node of type <bindable node> (Background, Fog, NavigationInfo, or Viewpoint):

  1. During read, the first encountered <bindable node> is bound by pushing it to the top of the <bindable node> stack. Nodes contained within Inlines, within the strings passed to the Browser.createVrmlFromString() method, or within VRML files passed to the Browser.createVrmlFromURL() method (see 4.13.10, Scene authoring interface) are not candidates for the first encountered <bindable node>. The first node within a prototype instance is a valid candidate for the first encountered <bindable node>. The first encountered <bindable node> sends an isBound TRUE event.
  2. When a set_bind TRUE event is received by a <bindable node>,
    1. If it is not on the top of the stack: the current top of stack node sends an isBound FALSE event. The new node is moved to the top of the stack and becomes the currently bound <bindable node>. The new <bindable node> (top of stack) sends an isBound TRUE event.
    2. If the node is already at the top of the stack, this event has no effect.
  3. When a set_bind FALSE event is received by a <bindable node> in the stack, it is removed from the stack. If it was on the top of the stack,
    1. it sends an isBound FALSE event;
    2. the next node in the stack becomes the currently bound <bindable node> (i.e., pop) and issues an isBound TRUE event.
  4. If a set_bind FALSE event is received by a node not in the stack, the event is ignored and isBound events are not sent.
  5. When a node replaces another node at the top of the stack, the isBound TRUE and FALSE eventOuts from the two nodes are sent simultaneously (i.e., with identical timestamps).
  6. If a bound node is deleted, it behaves as if it received a set_bind FALSE event (see 3. above). These rules are illustrated in the sketch below.
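
A minimal sketch (standard VRML97 node types): V1, being first encountered, is bound at load time (rule 1); activating the TouchSensor pushes V2 (rule 2) and deactivating pops it (rule 3):
    DEF V1 Viewpoint { description "Overview" }
    DEF V2 Viewpoint { description "Close-up" }
    Group {
        children [
            DEF TOUCH TouchSensor { }
            Shape { geometry Box { } }        # geometry that activates the sensor
        ]
    }
    ROUTE TOUCH.isActive TO V2.set_bind       # TRUE pushes V2; FALSE pops it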

4.7.11 Surfaces, images and textures

4.7.11.1  Overview

A surface is a 2D image that contains an array of colour values called pixels. Surfaces can be used as the source image of a texturing operation or an image processing operation. They can also be used as the pixel values for an efficient pixel block transfer to the frame buffer.

4.7.11.2 Surface pixel values

Nodes derived from SurfaceNode specify surfaces. In all cases, a surface describes a 2D image that contains an array of colour values. The source of these pixels can be from a variety of marking engines, e.g., a simple static image file, a video stream, or a 2D, 3D, or HTML renderer. A surface may also be derived from the combination of other surfaces using various image operations. A surface may contain from one to four components of pixel data. The number of components determines the use of these components in various rendering situations. Generally speaking, surfaces are used as follows:
  1. Intensity or alpha opacity (one-component)
  2. Intensity plus alpha opacity (two-component)
  3. RGB (three-component)
  4. RGB plus alpha opacity (four-component)
Note that alpha opacity is different from transparency (alpha opacity = 1 - transparency).

4.7.11.3 Image and movie formats

Surface nodes that require support for the PNG (see 2.[PNG]) image format shall interpret the PNG pixel formats in the following way:
  1. Greyscale pixels without alpha or simple transparency are treated as one-component surfaces.
  2. Greyscale pixels with alpha or simple transparency are treated as intensity plus alpha surfaces.
  3. RGB pixels without alpha channel or simple transparency are treated as RGB surfaces.
  4. RGB pixels with alpha channel or simple transparency are treated as RGB plus alpha surfaces.
If the image specifies colours as indexed-colour (i.e., palettes or colourmaps), the following semantics shall be used (note that `greyscale' refers to a palette entry with equal red, green, and blue values):
  1. If all the colours in the palette are greyscale and there is no transparency chunk, it is treated as an intensity surface.
  2. If all the colours in the palette are greyscale and there is a transparency chunk, it is treated as an intensity plus alpha surface.
  3. If any colour in the palette is not grey and there is no transparency chunk, it is treated as an RGB surface.
  4. If any colour in the palette is not grey and there is a transparency chunk, it is treated as an RGB plus alpha surface.
Surface nodes that require support for JPEG files shall interpret the JPEG pixel data as follows:
  1. Greyscale files (number of components equals 1) are treated as intensity surfaces.
  2. YCbCr files are treated as RGB surfaces.
  3. No other JPEG file types are required. It is recommended that other JPEG files are treated as RGB surfaces.
Surface nodes for which support for GIF files is recommended shall follow the applicable semantics described above for the PNG format.

Surface nodes that require support for MPEG files (see 2.[MPEG]) are considered to contain a single frame of pixel data (determined by the media time affecting the playing of the movie) at any given moment of time. This single frame shall be treated as an RGB surface unless it is MPEG-4 Shaped Video, in which case it shall be treated as an RGB plus alpha surface.

4.7.11.4 Using surfaces as texture maps

Surface pixel data are used as the image sources for texturing operations (see x.xx Texture). In this case, one-component surfaces are interpreted as intensity values and other surface types are used as stated above. See Table 4.5 and Table 4.6 for a description of how the various texture types are applied.

4.7.11.5 Texture map image formats

Texture nodes that require support for the PNG (see 2.[PNG]) image format (6.5, Background, and 6.22, ImageTexture) shall interpret the PNG pixel formats in the following way:

  1. Greyscale pixels without alpha or simple transparency are treated as intensity textures.
  2. Greyscale pixels with alpha or simple transparency are treated as intensity plus alpha textures.
  3. RGB pixels without alpha channel or simple transparency are treated as full RGB textures.
  4. RGB pixels with alpha channel or simple transparency are treated as full RGB plus alpha textures.

If the image specifies colours as indexed-colour (i.e., palettes or colourmaps), the following semantics should be used (note that `greyscale' refers to a palette entry with equal red, green, and blue values):

  1. If all the colours in the palette are greyscale and there is no transparency chunk, it is treated as an intensity texture.
  2. If all the colours in the palette are greyscale and there is a transparency chunk, it is treated as an intensity plus alpha texture.
  3. If any colour in the palette is not grey and there is no transparency chunk, it is treated as a full RGB texture.
  4. If any colour in the palette is not grey and there is a transparency chunk, it is treated as a full RGB plus alpha texture.

Texture nodes that require support for JPEG files (see 2.[JPEG], 6.5, Background, and 6.22, ImageTexture) shall interpret JPEG files as follows:

  1. Greyscale files (number of components equals 1) are treated as intensity textures.
  2. YCbCr files are treated as full RGB textures.
  3. No other JPEG file types are required. It is recommended that other JPEG files be treated as full RGB textures.

Texture nodes for which support for GIF files is recommended (see C.[GIF], 6.5, Background, and 6.22, ImageTexture) shall follow the applicable semantics described above for the PNG format.

A surface containing a single frame of an MPEG file (see 2.[MPEG]) at a given moment of time shall treat MPEG files as full RGB images.

--- VRML separator bar ---

4.8 Field semantics

Fields are placed inside node statements in a VRML file, and define the persistent state of the virtual world. Multiple values for the same field in the same node (e.g., Sphere { radius 1.0 radius 2.0 }) use the latter value.

Each node interprets the values of the events sent to it or generated by it according to its implementation.

Fields are described in 5, Field and event reference.

Fields can receive events, can generate events, and can be stored in VRML files. The initial value of a field is its value in the VRML file or, if a value is not specified, the default value for the node in which it is contained. When a field receives an event, it shall generate an event with the same value and timestamp. The following sources, in precedence order, shall be used to determine the initial value of the field:

  1. the user-defined value in the instantiation (if one is specified); 
  2. the default value for that field as specified in the node or prototype definition.

The rules for naming fields are as follows:

  1. All names containing multiple words start with a lower case letter, and the first letter of all subsequent words is capitalized (e.g., addChildren).

--- VRML separator bar ---

4.9 Prototype semantics

4.9.1 Introduction

The PROTO statement defines a new node type in terms of already defined (built-in or prototyped) node types or as a native extension node type. Once defined, prototyped node types may be instantiated in the scene graph exactly like the built-in node types.

VRML contains a prototyping mechanism tied closely to the object model. All objects have a prototype which was used to create them. Object instances are created through a function call to the prototype. Every prototype contains a set of top level fields and functions which become the properties of the created object. These are accessible from outside the object and are used to control it by setting field values and calling functions. Field properties can also be read to learn the state of the object and function properties can return values to the caller.

4.9.2 PROTO definition semantics

A prototype definition consists of one or more nodes, nested PROTO statements, and ROUTE statements. The first node type determines how instantiations of the prototype can be used in a VRML file. An instantiation is created by filling in the parameters of the prototype declaration and inserting copies of the first node (and its scene graph) wherever the prototype instantiation occurs. For example, if the first node in the prototype definition is a Material node, instantiations of the prototype can be used wherever a Material node can be used. Any other nodes and accompanying scene graphs are not part of the transformation hierarchy, but may be referenced by ROUTE statements or Script nodes in the prototype definition.

Nodes in the prototype definition may have their fields associated with the fields of the prototype interface declaration. This is accomplished using IS statements in the body of the node. When prototype instances are read from a VRML file, field values for the fields of the prototype interface may be given. If given, the field values are used for all nodes in the prototype definition that have IS statements for those fields. Similarly, when a prototype instance is sent an event, the event is delivered to all nodes that have IS statements for that event. When a node in a prototype instance generates an event that has an IS statement, the event is sent to any eventIns connected (via ROUTE) to the prototype instance's eventOut.

IS statements may appear inside the prototype definition wherever fields may appear. IS statements shall refer to fields or events defined in the prototype declaration. Results are undefined if an IS statement refers to a non-existent declaration. Results are undefined if the type of the field or event being associated by the IS statement does not match the type declared in the prototype's interface declaration. For example, it is illegal to associate an SFColor with an SFVec3f. It is also illegal to associate an SFColor with an MFColor or vice versa.
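
For illustration, a minimal sketch in classic VRML97 PROTO syntax (this draft's encodings may differ) associating an interface field with a node's field:
    PROTO ColoredSphere [ field SFColor sphereColor 1 0 0 ] {
        Shape {
            appearance Appearance {
                material Material { diffuseColor IS sphereColor }
            }
            geometry Sphere { }
        }
    }
An instantiation such as ColoredSphere { sphereColor 0 0 1 } then yields a blue sphere.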

Results are undefined if an IS statement:

An exposedField in the prototype interface may be associated only with an exposedField in the prototype definition, but an exposedField in the prototype definition may be associated with either a field, eventIn, eventOut or exposedField in the prototype interface. When associating an exposedField in a prototype definition with an eventIn or eventOut in the prototype declaration, it is valid to use either the shorthand exposedField name (e.g., translation) or the explicit event name (e.g., set_translation or translation_changed). Table 4.4 defines the rules for mapping between the prototype declarations and the primary scene graph's nodes (yes denotes a legal mapping, no denotes an error).

Table 4.4 -- Rules for mapping PROTOTYPE declarations to node instances

                        Prototype declaration

Prototype definition   exposedField   field   eventIn   eventOut
exposedField           yes            yes     yes       yes
field                  no             yes     no        no
eventIn                no             no      yes       no
eventOut               no             no      no        yes

Results are undefined if a field, eventIn, or eventOut of a node in the prototype definition is associated with more than one field, eventIn, or eventOut in the prototype's interface (i.e., multiple IS statements for a field, eventIn, or eventOut in a node in the prototype definition), but multiple IS statements for the fields, eventIns, and eventOuts in the prototype interface declaration are valid. Results are undefined if a field of a node in a prototype definition is both defined with initial values (i.e., field statement) and associated by an IS statement with a field in the prototype's interface. If a prototype interface has an eventOut E associated with multiple eventOuts EDi in the prototype definition, the value of E is the value of the eventOut that generated the event with the greatest timestamp. If two or more of the eventOuts generated events with identical timestamps, results are undefined.

Prototypes can be created in one of three ways:

  1. declaratively, with a PROTO statement in a VRML file;
  2. externally, with an EXTERNPROTO statement referencing a prototype definition in another VRML file;
  3. natively, with an EXTERNPROTO statement referencing an implementation written in an extension language such as C++ or Java.

Regardless of the method for construction, the resultant Proto appears the same to the runtime environment. An instance of the proto may be created, the fields of the resultant node may be read and written and functions of the node can be called.

Setting a data property can also execute some procedural logic internal to the object. This can modify the internal state of the object, modify the system in some way, or set other top level fields on the object. If the prototype is created using VRML statements, the VRML SAI is used internally for this purpose.

For example:

    PROTO MyNode [
        field Float fractionIn
        {
            if (fractionIn >= 0 && fractionIn <= 1)
                fractionOut = 1-fractionIn;
        }
        field Float fractionOut 0
    ]
This describes a prototype for MyNode, which takes a fractionIn event and generates an inverse event at fractionOut, as long as the value is between zero and one. When fractionIn is changed, it executes its associated function, which changes fractionOut. This node could be instantiated and routed between an IntervalSensor and an interpolator to reverse the direction of the interpolation. The contents of the proto definition (everything between the brackets above) could also be placed in a separate VRML file, along with the required header line, and the result would be a legal file. If this file were named "MyNode.wrl" it could be used in a proto:
    EXTERNPROTO MyNode "MyNode.wrl"
This expression would create a Proto identical to the previous example in every way. Likewise, if the same example were coded in an extension language, such as C++ or Java, it could be included as follows:
    EXTERNPROTO MyNode "MyNode.class"
Assuming the native implementation of the node created the fields, installed them into the Proto and performed the proper operation when an event was received, this expression would produce an identical Proto as well.

This symmetry allows for simplicity in the handling of prototypes and Blendo files. For instance, an implementation could choose to create the root scene by first instantiating an external prototype, giving it the name of the root Blendo file, and instantiating a node of that type.

4.9.3  Scoping rules

Prototypes have file scope and their names must therefore be unique within a given file. A prototype definition must precede any instantiation of that prototype in the file. One prototype can appear inside another, but its scope is limited to the enclosing prototype. However, a prototype can instantiate nodes contained within the scope of an enclosing prototype or at the top level file scope. For instance:
    #VRML V3.0 utf8 ...

    PROTO Node1 [ ... ]
    PROTO Node2 [
        PROTO Node3 [
            Node2 { }     # legal, proto is in enclosing scope
            Node1 { }     # legal, proto is at top level scope
        ]
        Node3 { }         # legal, proto is at same scope level
    ]
    PROTO Node4 [
        Node3 { }         # illegal, proto is hidden in Node2's scope
        Node2 { }         # legal, proto is at same scope level
    ]
In the above example, only the instantiation of Node3 in Node4 is illegal, because Node3 is not accessible outside the scope of Node2.

A PROTO statement establishes a DEF/USE name scope separate from the rest of the scene and separate from any nested PROTO statements. Nodes given a name by a DEF construct inside the prototype may not be referenced in a USE construct outside of the prototype's scope. Nodes given a name by a DEF construct outside the prototype scope may not be referenced in a USE construct inside the prototype scope.

A prototype may be instantiated in a file anywhere after the completion of the prototype definition. A prototype may not be instantiated inside its own implementation (i.e., recursive prototypes are illegal).

A prototype can be exposed outside the scope of the enclosing prototype using a top level field. For instance:

    PROTO Node2 [
        field Proto node3 PROTO Node3 [ ... ]
    ]
Node3 is now exposed through the field node3. This prototype can now be instantiated declaratively using the dereferencing syntax:
    DEF P Node2 { }

    P.node3 { }
This allows entire libraries of prototypes to be included in a VRML file:
    EXTERNPROTO WidgetLibrary "WidgetLibrary.wrl"
    DEF WL WidgetLibrary { }

    WL.Button { label "go" }
    WL.Slider { min 0 max 10 color 0.8 0.2 0 }

4.9.4 PROTO interface declaration semantics

The prototype interface defines the fields and functions for the new node type. When defining a prototype using one of the VRML file encodings, an interface declaration for a field includes the type, name, initial value and function for the field. An interface declaration for a function includes the name, return type, a list of parameter names and types, and the body of the function. Here is an example using the VRML UTF-8 encoding:
    PROTO MyNode [
        field Vec3f a 0 0 0
        field Float b ;
        field MyFieldType c { a = new Vec3f(0,c,0); }
        field Bool d true { b = new Float(d); }

        function e(num) {
            v = new MF(Vec2f);
            v.length = num;
            for (i = 0; i < num; ++i)
                v[i] = new Vec2f(i,i+1);
            return v;
        }
    ]
In this example the prototype has four fields. The first is initialized, but has no associated function. The second has neither a function nor an initialization value. The semicolon is required in this case as a separator between fields. The third field has a function but no initializer, and the fourth has both an initializer and a function. A field without a function performs no operation, other than storage, when the field is changed. A field without an initialization value contains the default value for that field type at startup. Fields with initialization values send an initial event with that value when a node instance is created. Any routes from that field inside or outside the prototype propagate this initial event. Fields with both an initialization value and a function execute the function when the initial event is sent. Field functions are executed before any routes are propagated. Routes to fields inside the prototype propagate before routes to fields outside.

A prototype defined in one of the supported extension languages has the same attributes for fields and functions, but they are typically defined programmatically rather than declaratively.

4.9.5 Derivation and inheritance

Every prototype is derived from one or more parent prototypes. If not specified, the prototype is derived from ChildNode. A parent prototype is declared in the body of the prototype using the EXPORT statement. For example:
    PROTO MyNode [
        DEF M Material { }
        EXPORT M

        IntervalSensor {
            timeBase TimeBase { loop true }
            fraction TO CI.fraction
        }
        DEF CI ColorInterpolator {
            key [ 0 0.5 1 ]
            keyValue [ 1 0 0, 0 0 1, 1 0 0 ]
            value TO M.diffuseColor
        }
    ]
This defines a prototype for a node that can be used in place of a Material which animates diffuseColor between red and blue. Without the EXPORT statement, it would be an error to place an instance of this prototype in, say, the material field of an Appearance node.

Nodes can be derived from multiple parents by using multiple EXPORT statements. EXPORT makes all fields and functions of all parents appear as fields and functions of the derived node. Exported fields and functions are conceptually added to the derived prototype in the order in which the EXPORT statements appear. When a field or function name is duplicated, called overriding, the last addition supplies the implementation. A field or function must be overridden with an entity of the exact same type. A field or function can also be overridden in the derived prototype itself. For instance, if an implementation has a TouchSensor node and a PlaneSensor node, each with a boolean enable field, deriving a prototype from both would encounter a potential problem: the resulting enable field would control only one of the parents. To solve this, the derived prototype can override the field and perform a custom function:

    PROTO TouchPlaneSensor [
        DEF TS TouchSensor { }
        DEF PS PlaneSensor { }
        EXPORT TS
        EXPORT PS
        field Bool enable true {
            TS.enable = enable;
            PS.enable = enable;
        }
    ]
Now, when enable is changed, both the TouchSensor and PlaneSensor are correctly updated.

Sometimes, it may be necessary to export a parent type rather than the instantiated type. The EXPORT statement supports this with the AS modifier. For instance, a prototype might instantiate a TimeBase node to provide start and stop control for some function, but only export it as a TimeBaseNode so only the mediaTime field is exported. This would be done as follows:

    PROTO MediaController [
        field Bool run false {
            if (run)
                TB.startTime = Time.now();
            else TB.stopTime = Time.now();
        }
        DEF TB TimeBase { }
        EXPORT TB AS TimeBaseNode
    ]
Since the TimeBase node is exported as a TimeBaseNode, only the fields of that object (i.e., mediaTime) are exported. The resultant prototype contains two fields: mediaTime and run.

4.9.6 Private fields and functions

Normally, all top-level fields and functions in a prototype are exported, but each field and function can be marked private to prevent this. This is often useful for maintaining state that is global to the instance but not visible outside it, or for utility functions. In the VRML UTF-8 encoding, this is done using the private keyword. For instance:
    PROTO MyNode [
        field Bool run false                   # this field is exported
        private field Bool currentState false  # this field is private to the instance
        function Bool checkState() { ... }     # this function is exported
        private function change() { ... }      # this function is private to the instance
    ]

4.9.7 Prototype initialization

The sequence of events which occurs when a request is made to instantiate a prototype is as follows:
  1. An empty prototype instance node is created.
  2. Its implementation nodes are instantiated and added to the instance.
  3. The fields and functions from each parent node are created, with any necessary overriding.
  4. The fields and functions from the prototype are created, with any necessary overriding.
  5. The prototype's build( ) function, if any, is called.
  6. Any routes internal to the prototype are connected.
  7. Each field with initialization values has the value set, which may generate events internal to the instance.
  8. If an initialization value is a node, it is instantiated according to these rules before setting the value.
  9. If an initialization value is a node, its initialize( ) function, if any, is called after setting the value.
After this sequence, the node is typically added to the scene. Since the loading of a scene is identical to prototype instantiation, adding a prototyped node to the parent scene occurs at step 2 above. In fact, the loading of a scene is simply a recursive execution of the above sequence until all nodes are loaded and instantiated.
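
For example, in the following sketch (the Color field type is an assumption; the node name is illustrative), the Material is instantiated at step 2, the exported fields appear at step 3, and the tint field's initial event executes its function at step 7, so the Material is already red before the instance is added to the scene:

    PROTO TintedMaterial [
        DEF M Material { }
        EXPORT M
        field Color tint 1 0 0 {    # step 7: initial event executes this function
            M.diffuseColor = tint;  # M already exists; it was instantiated at step 2
        }
    ]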

4.9.8 Anonymous prototypes

Normally, prototypes include a name to be used for later instantiation. But VRML also supports the concept of an anonymous prototype. This allows a prototype without a name to be defined and immediately instantiated. It is mainly an authoring convenience but also serves to prevent pollution of the namespace with trivial prototypes. For instance:
    DEF Invert PROTO [
        field Float fractionIn { fractionOut = 1 - fractionIn; }
        field Float fractionOut ;
    ] { fractionOut TO PI.fraction }

    IntervalSensor {
        timeBase TimeBase { loop true }
        fraction TO Invert.fractionIn
    }
    DEF PI PositionInterpolator { ... }
This example is similar to the interpolator example in 4.9.5. It defines a trivial prototype that reverses the direction of a fraction before it reaches the PositionInterpolator.

Anonymous prototypes can be used with the EXTERNPROTO statement as well:

    EXTERNPROTO "lightcontroller.class" {
        light1 TO Light1.on
        light2 TO Light2.on
        light3 TO Light3.on
    }
    DEF Light1 DirectionalLight { ... }
    DEF Light2 DirectionalLight { ... }
    DEF Light3 DirectionalLight { ... }
In this example, a node which exerts control over three lights in the scene is implemented in Java. Since only one instance of the node will ever be used, an anonymous prototype is a convenient construct.

--- VRML separator bar ---

4.10 External prototype semantics

4.10.1 Introduction

The EXTERNPROTO statement defines a new node type. It is equivalent to the PROTO statement, with two exceptions. First, the implementation of the node type is stored externally, either in a VRML file containing an appropriate PROTO statement or using some other implementation-dependent mechanism. Second, default values for fields are not given since the implementation will define appropriate defaults.

4.10.2 EXTERNPROTO interface semantics

The semantics of the EXTERNPROTO are exactly the same as for a PROTO statement, except that default field values are not specified locally. In addition, events sent to an instance of an externally prototyped node may be ignored until the implementation of the node is found.

Until the definition has been loaded, the browser shall determine the initial value of fields using the following rules (in order of precedence):

  1. the user-defined value in the instantiation (if one is specified);
  2. the default value for that field type.

For eventOuts, the initial value on startup will be the default value for that field type. During the loading of an EXTERNPROTO, if an initial value of an eventOut is found, that value is applied to the eventOut and no event is generated.
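
For example (a sketch; the Washer prototype, its size field, and washer.wrl are hypothetical):

    EXTERNPROTO Washer [ field Float size ] "washer.wrl"

    Washer { size 2 }   # rule 1: the user-defined value 2 is used until the definition loads
    Washer { }          # rule 2: the default value for Float is assumed until then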

The names and types of the fields of the interface declaration shall be a subset of those defined in the implementation. Declaring a field or event with a non-matching name is an error, as is declaring a field or event with a matching name but a different type.

It is recommended that user-defined field names defined in EXTERNPROTO interface statements follow the naming conventions described in 4.8, Field semantics.

4.10.3 EXTERNPROTO URL semantics

The string or strings specified after the interface declaration give the location of the prototype's implementation. If multiple strings are specified, the browser searches them in order of preference (see 4.6.2, URLs).

If a URL in an EXTERNPROTO statement refers to a VRML file, the first PROTO statement found in the VRML file (excluding EXTERNPROTOs) is used to define the external prototype's definition. The name of that prototype does not need to match the name given in the EXTERNPROTO statement. Results are undefined if a URL in an EXTERNPROTO statement refers to a non-VRML file.

To enable the creation of libraries of reusable PROTO definitions, browsers shall recognize EXTERNPROTO URLs that end with "#name" to mean the PROTO statement for "name" in the given VRML file. For example, a library of standard materials might be stored in a VRML file called "materials.wrl" that looks like:

    #VRML V2.0 utf8
    PROTO Gold   [] { Material { ... } }
    PROTO Silver [] { Material { ... } }
    ...etc.

A material from this library could be used as follows:

    #VRML V2.0 utf8
    EXTERNPROTO GoldFromLibrary [] "http://.../materials.wrl#Gold"
    ...
    Shape {
        appearance Appearance { material GoldFromLibrary {} }
        geometry   ...
    }
    ...

--- VRML separator bar ---

4.11 Event processing

4.11.1 Introduction

Any field in VRML can be made the recipient of events. Incoming events are data messages sent by other nodes to change some state within the receiving node. Some nodes change the contents of one or more of their fields through internal processing, perhaps stimulated by the receipt of an event on another field of the same node. These changes can in turn be used to effect change in fields of other nodes.

When a node is first instantiated, one or more of its fields may have an initial value, set through a declarative syntax. Such a field is set to this value at a specific time during the initialization of the node (see 4.9.7, Prototype initialization). The setting of this value effects change in the fields of other nodes in the same way as any other source of change.

4.11.2 Route semantics

The connection between the node generating the event and the node receiving the event is called a route. Routes are not nodes; the ROUTE statement is a construct for establishing event paths between specified fields of nodes. ROUTE statements may appear either at the top level of a VRML file or inside a node wherever fields may appear. The position of a ROUTE statement in a file is insignificant: it may appear before or after its source or destination node, and placing a ROUTE statement within a node does not associate it with that node in any way. A ROUTE statement does, however, follow the scoping rules described in 4.9.3, Scoping rules.

The type of the destination field shall be the same as, or derived from, the source type. For the purposes of this rule, an array type is considered to be derived from the corresponding single-value type. Therefore a value of type Vec3f can be routed to a value of type MF Vec3f, but the converse is not true: an MF Vec3f cannot be routed to a Vec3f.
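
For instance, a single-value output may legally drive an array field. In this sketch the top-level ROUTE form follows 4.4.9, and the PlaneSensor translation and Coordinate point fields are assumed to carry Vec3f and MF Vec3f values respectively:

    DEF PS PlaneSensor { }
    DEF C  Coordinate { }             # point is an MF Vec3f field

    ROUTE PS.translation TO C.point   # legal: Vec3f routed to MF Vec3f
    # The converse, ROUTE C.point TO PS.translation, would be an error.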

Redundant routing is ignored. If a VRML file repeats a routing path, the second and subsequent identical routes are ignored. This also applies to routes created dynamically via the VRML SAI.

4.11.3 Execution model

Once an initial event is generated, usually from a node responding to some environmental stimulus, the event is propagated from the field producing it along any routes to fields in other nodes. These other nodes may respond by generating additional events, continuing until all routes have been honoured. This process is called an event cascade.

Events are processed in depth-first order. Some nodes generate multiple events from a single stimulus, and multiple initial events may arrive with the same timestamp. In these cases the simultaneous events have a well-defined order, usually described in the reference for each node. During processing, an event generated by a given node is sent to its destination, which performs its own processing and generates its own events, before additional events from the original node are sent. The same is true for fields with multiple outgoing routes: each route is fired in the order it was added, and processing along one route completes fully before the next route is fired.

During processing, deferred events may be generated. They are typically actions which are required to complete processing but must, for some reason, not be executed until the event cascade is complete. One example of this is the eventsProcessed( ) function. Any node may have this function. It is typically added by authors of prototyped nodes to perform some processing on a group of incoming events that may all arrive simultaneously. The eventsProcessed( ) function executes at the end of the cascade and can therefore be used to perform this group processing.

A deferred event is added to a list when it is discovered that it is needed. For eventsProcessed( ), this occurs when the first event is received by a node that has an eventsProcessed( ) function. Events from many nodes can be added to this list, but only one deferred event is ever added for a given node. Once the event cascade is finished, these events are executed in the order they were added. Each execution can potentially send additional events and is therefore considered a new event cascade. Because of this, the execution of deferred events can add further deferred events to the list. These are added after the last event from the previous cascade, ensuring that all deferred events from one cascade execute before any from subsequent cascades.
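
As a sketch of how a prototype author might exploit this (the function syntax follows 4.9.4; the node and its Float arithmetic are illustrative assumptions):

    PROTO Averager [
        field Float a ;     # a and b may receive events in the same cascade
        field Float b ;
        field Float mean ;

        function eventsProcessed() {
            # Deferred: runs once, after the cascade completes,
            # so mean is sent once rather than once per incoming event.
            mean = new Float((a + b) / 2);
        }
    ]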

The eventsProcessed( ) function is one deferred event added by the runtime engine. Authors can add additional deferred events through an API to the Event Manager.

4.11.4 Loops

Event cascades may contain loops, where an event E is routed to a node that generates an event that eventually results in E being generated again. See 4.11.3, Execution model, for the loop-breaking rule that limits each field to one event per timestamp. This rule shall also be used to break loops created by cyclic dependencies between different sensor nodes.

4.11.5 Fan-in and fan-out

Fan-in occurs when two or more routes write to the same field. Events arriving at a field along different routes in the same event cascade shall all be processed; the order is defined by the depth-first rule described in 4.11.3, Execution model.

Fan-out occurs when one field is the source for more than one route. This results in sending any event generated by the field along all routes in the order the routes were added.
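
For instance (a sketch assuming the conventional TouchSensor isOver and DirectionalLight on fields):

    DEF TOUCH TouchSensor { }
    DEF L1 DirectionalLight { }
    DEF L2 DirectionalLight { }

    # Fan-out: one source field feeds two destinations;
    # the routes fire in the order they were added.
    ROUTE TOUCH.isOver TO L1.on
    ROUTE TOUCH.isOver TO L2.on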

--- VRML separator bar ---

4.12 Time

4.12.1 Introduction

The browser controls the passage of time in a world by causing TimeSensors to generate events as time passes. Specialized browsers or authoring applications may cause time to pass more quickly or slowly than in the real world, but typically the times generated by TimeSensors will approximate "real" time. A world's creator should make no assumptions about how often a TimeSensor will generate events but can safely assume that each time event generated will have a timestamp greater than any previous time event.

4.12.2 Time origin

Time (0.0) is equivalent to 00:00:00 GMT January 1, 1970. Absolute times are specified in SFTime or MFTime fields as double-precision floating point numbers representing seconds. Negative absolute times are interpreted as happening before 1970.

Processing an event with timestamp t may only result in generating events with timestamps greater than or equal to t.

4.12.3 Discrete and continuous changes

ISO/IEC 14772 does not distinguish between discrete events (such as those generated by a TouchSensor) and events that are the result of sampling a conceptually continuous set of changes (such as the fraction events generated by a TimeSensor). An ideal VRML implementation would generate an infinite number of samples for continuous changes, each of which would be processed infinitely quickly.

Before processing a discrete event, all continuous changes that are occurring at the discrete event's timestamp shall behave as if they generate events at that same timestamp.

Beyond the requirements that continuous changes be up-to-date during the processing of discrete changes, the sampling frequency of continuous changes is implementation dependent. Typically a TimeSensor affecting a visible (or otherwise perceptible) portion of the world will generate events once per frame, where a frame is a single rendering of the world or one time-step in a simulation.

--- VRML separator bar ---

4.13 Authoring

4.13.1 Introduction

Authors often require that VRML worlds change dynamically in response to user inputs, external events, and the current state of the world. The proposition "if the vault is currently closed AND the correct combination is entered, open the vault" illustrates the type of problem which may need addressing. Such decisions are expressed programmatically through the Scene Authoring Interface (SAI), either internally in Script nodes (see 6.40, Script) or externally from other application programs; both are referred to as authoring environments. In either case, the authoring environment can receive events, process them, and send new events, and it can keep track of information between subsequent executions (i.e., retain internal state over time).

This subclause describes the general mechanisms and semantics of all authoring access. The VRML Scene Authoring Interface is defined in ISO/IEC 14772-2. Also defined are access mechanisms to the services of the SAI through four different authoring languages. The implementation of the Script node defined in this part of ISO/IEC 14772 shall conform to the requirements of ISO/IEC 14772-2.

For internal authoring, event processing is performed by a program or script contained in (or referenced by) the Script node's url field. This program or script may be written in any programming language that the browser supports.

4.13.2 Script execution

A Script node is activated when it receives an event. The browser shall then execute the program in the Script node's url field (passing the program to an external interpreter if necessary). The program can perform a wide variety of actions including sending out events (and thereby changing the scene), performing calculations, and communicating with servers elsewhere on the Internet. A detailed description of the ordering of event processing is contained in 4.11, Event processing.

Script nodes may also be executed at initialization and shutdown as specified in ISO/IEC 14772-1. Some scripting languages may allow the creation of separate processes from scripts, resulting in continuous execution (see 4.13.6, Asynchronous scripts).

Script nodes receive events in timestamp order. Any events generated as a result of processing an event are given timestamps corresponding to the event that generated them. Conceptually, it takes no time for a Script node to receive and process an event, even though in practice it does take some amount of time to execute a Script.

When a set_url event is received by a Script node that contains a script that has previously been initialized for a different URL, the shutdown() service of the current script is called (see 4.13.3, Initialize() and shutdown()). Until the new script becomes available, the Script node shall behave as though it has no executable content. When the new script becomes available, the initialize() service is invoked as defined in 4.11.3, Execution model. The limiting case is when the URL contains inline code that can be executed immediately upon receipt of the set_url event (e.g., the javascript: protocol). In this case, it can be assumed that the old code is unloaded and the new code loaded instantaneously, after any dynamic route requests have been performed.

4.13.3 Initialize() and shutdown()

The scripting language binding may define an initialize() method. This method shall be invoked before the browser presents the world to the user and before any events are processed by any nodes in the same VRML file as the Script node containing this script. Events generated by the initialize() method shall have timestamps less than any other events generated by the Script node. This allows script initialization tasks to be performed prior to the user interacting with the world.

Likewise, the scripting language binding may define a shutdown() method. This method shall be invoked when the corresponding Script node is deleted or the world containing the Script node is unloaded or replaced by another world. This method may be used as a clean-up operation, such as informing external mechanisms to remove temporary files. No other methods of the script may be invoked after the shutdown() method has completed, though the shutdown() method may invoke methods or send events while shutting down. Events generated by the shutdown() method that are routed to nodes that are being deleted by the same action that caused the shutdown() method to execute will not be delivered. The deletion of the Script node containing the shutdown() method is not complete until the execution of its shutdown() method is complete.
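
A minimal sketch of both methods, using the inline javascript: form shown in 4.13.7 (the behaviour shown is illustrative only):

    Script {
        url "javascript:
            function initialize() {
                // invoked before the world is presented and before any events are processed
            }
            function shutdown() {
                // invoked when this Script node is deleted or the world is unloaded
            }"
    }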

The specification for these two functions is contained in ISO/IEC 14772-2.

4.13.4 eventsProcessed()

The scripting language binding may define an eventsProcessed() method that is called after one or more events are received. This method allows Scripts that do not rely on the order of events received to generate fewer events than an equivalent Script that generates events whenever events are received. If it is used in some other time-dependent way, eventsProcessed() may be nondeterministic, since different browser implementations may call eventsProcessed() at different times.

For a single event cascade, a given Script node's eventsProcessed method shall be called at most once. Events generated from an eventsProcessed() method are given the timestamp of the last event processed.

The specification for this function is contained in ISO/IEC 14772-2.

4.13.5 Scripts with direct outputs

Scripts that have access to other nodes (via SFNode/MFNode fields) and that have their directOutput field set to TRUE may directly post events to those nodes. They may also read the last value sent from any of the node's fields.

When setting a value in another node, implementations are free to either immediately set the value or to defer setting the value until the Script is finished. When getting a value from another node, the value returned shall be up-to-date; that is, it shall be the value immediately before the time of the current timestamp (the current timestamp returned is the timestamp of the event that caused the Script node to execute).

If multiple directOutput Scripts read from and/or write to the same node, the results are undefined.

4.13.6 Asynchronous scripts

Some languages supported by VRML browsers may allow Script nodes to spontaneously generate events, allowing users to create Script nodes that behave like new Sensor nodes. In these cases, the Script generates the initial events that cause event cascades, and the scripting language and/or the browser shall determine an appropriate timestamp for each initial event. Such events are then sorted into the event stream and processed like any other event, following all of the same rules, including those for loops.

4.13.7 Script languages

The Script node's url field may specify a URL which refers to a file (e.g., using protocol http:) or incorporate scripting language code directly in-line. The MIME-type of the returned data defines the language type. Additionally, instructions can be included in-line using the scripting language protocol defined for the specific language (see 4.6.4, Scripting language protocols), in which case the language type is inferred from the protocol.

For example, the following Script node has one field named start and three different URL values specified in the url field: Java, ECMAScript, and inline ECMAScript:

    Script {
      field SFBool start
      url [ "http://foo.com/fooBar.class",
        "http://foo.com/fooBar.js",
        "javascript:function start(value, timestamp) { ... }"
      ]
    }
In the above example, when a start event is received by the Script node, one of the scripts found in the url field is executed. The Java platform bytecode is the first choice, the ECMAScript file is the second choice, and the inline ECMAScript code is the third choice. A description of the order of preference for multiple-valued URL fields may be found in 4.6.2, URLs.

4.13.8 Event handling

Events received by the Script node are passed to the appropriate scripting language method in the script. The method's name depends on the language type used. In some cases, it is identical to the name of the field; in others, it is a general callback method for all events (see the scripting language annexes for details). The method is passed two arguments: the event value and the event timestamp.
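
For example, in a language where the method name matches the field name, a handler receiving both arguments might look like this (a sketch; the field name is illustrative):

    Script {
        field SFBool start
        url "javascript:
            function start(value, timestamp) {
                // value:     the Bool carried by the incoming event
                // timestamp: the time at which the event was generated
            }"
    }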

4.13.9 Accessing fields and events

The fields of a Script node are accessible from scripting language methods. Events can be routed to fields of Script nodes, and the fields of Script nodes can be routed to fields of other nodes. Another Script node with access to this node can access its fields just like those of any other node (see 4.13.5, Scripts with direct outputs).

It is recommended that user-defined field or event names defined in Script nodes follow the naming conventions described in 4.8, Field semantics.

The field values can be read or written and are persistent across method calls, and changes to a field can notify the node through its update method. See 5, Field definition, for more information.

4.13.10 Scene authoring interface

The VRML Scene Authoring Interface provides a wide variety of services which can be used to communicate with the VRML scene graph. These services include the initialize(), shutdown(), and eventsProcessed() services described earlier. See ISO/IEC 14772-2 for a complete specification of the supported services as well as details on using them from within Script nodes or from external programs.

--- VRML separator bar ---

4.14 Navigation

4.14.1 Introduction

Navigation is the capability of users to interact with the VRML browser using one or more input devices to affect the view it presents. Navigation support is not required for all profiles.

Every VRML scene can be thought of as containing a viewpoint from which the objects in the scene are presented to the viewer. Navigation permits the user to change the position and orientation of the viewpoint with respect to the remainder of the scene, thereby enabling the user to move through the scene and examine its objects.

The NavigationInfo node (see 6.29, NavigationInfo) specifies the characteristics of the desired navigation behaviour, but the exact user interface is browser-dependent. The Viewpoint node (see 6.53, Viewpoint) specifies key locations and orientations in the world to which the user may be moved via API or browser-specific user interfaces.

4.14.2 Navigation paradigms

The browser may allow the user to modify the location and orientation of the viewer in the virtual world using a navigation paradigm. Many different navigation paradigms are possible, depending on the nature of the virtual world and the task the user wishes to perform. For instance, a walking paradigm would be appropriate in an architectural walkthrough application, while a flying paradigm might be better suited to an application exploring interstellar space. Examination is another common paradigm, used when the scene contains one or more objects which the user wishes to view from many angles and distances.

The NavigationInfo node has a type field that specifies the navigation paradigm for this world. The actual user interface provided to accomplish this navigation is browser-dependent. See 6.29, NavigationInfo, for details.

4.14.3 Viewing model

The browser controls the location and orientation of the viewer in the world, based on input from the user (using the browser-provided navigation paradigm) and the motion of the currently bound Viewpoint node (and its coordinate system). The VRML author can place any number of viewpoints in the world at important places from which the user might wish to view the world. Each viewpoint is described by a Viewpoint node. Viewpoint nodes exist in their parent's coordinate system, and both the viewpoint and the coordinate system may be changed to affect the view of the world presented by the browser. Only one viewpoint is bound at a time. A detailed description of how the Viewpoint node operates is contained in 4.7.10, Bindable children nodes, and 6.53, Viewpoint.

Navigation is performed relative to the Viewpoint's location and does not affect the location and orientation values of a Viewpoint node. The location of the viewer may be determined with a ProximitySensor node (see 6.38, ProximitySensor).

4.14.4 Collision detection and terrain following

In profiles in which collision detection is required, the NavigationInfo types of WALK, FLY, and NONE shall strictly support collision detection between the user's avatar and other objects in the scene by prohibiting navigation and/or adjusting the position of the viewpoint. However, the NavigationInfo types ANY and EXAMINE may temporarily disable collision detection during navigation, but shall not disable collision detection during the normal execution of the world. See 6.29, NavigationInfo, for details on the various navigation types.

Collision nodes can be used to generate events when the viewer and objects collide, and to designate that certain objects be treated as transparent to collisions. Support for inter-object collision is not specified.

NavigationInfo nodes can be used to specify certain parameters often used by browser navigation paradigms. The size and shape of the viewer's avatar determines how close the avatar may be to an object before a collision is considered to take place. These parameters can also be used to implement terrain following by keeping the avatar a certain distance above the ground. They can additionally be used to determine how short an object must be for the viewer to automatically step up onto it instead of colliding with it.
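
A sketch of such parameters follows; the interpretation of the three avatarSize values (collision radius, eye height, maximum step height) reflects common browser usage, and 6.29, NavigationInfo, holds the normative definitions:

    NavigationInfo {
        type "WALK"
        avatarSize [ 0.25, 1.6, 0.75 ]   # collision radius, eye height, step height (metres)
        speed 1.0                        # nominal navigation rate
    }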

--- VRML separator bar ---

4.15 Lighting model

4.15.1 Introduction

The VRML lighting model provides detailed equations which define the colours to apply to each geometric object. For each object, the values of the Material node, Color node and texture currently being applied to the object are combined with the lights illuminating the object and the currently bound Fog node. These equations are designed to simulate the physical properties of light striking a surface.

4.15.2 Lighting 'off'

A Shape node is unlit if either of the following is true:

  1. The shape's appearance field is NULL (default).
  2. The material field in the Appearance node is NULL (default).

Note the special cases of geometry nodes that do not support lighting (see 6.24, IndexedLineSet, and 6.36, PointSet, for details).
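
For example, both of the following shapes are unlit (a sketch; the texture URL is hypothetical):

    Shape {                    # unlit: the appearance field is NULL (case 1)
        geometry Sphere { }
    }

    Shape {                    # unlit: Appearance present, but material is NULL (case 2)
        appearance Appearance {
            texture ImageTexture { url "decal.png" }
        }
        geometry Sphere { }
    }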

If the shape is unlit, the colour (Irgb) and alpha (A, i.e., 1 - transparency) of the shape at each point on the shape's geometry are given in Table 4.5.

Table 4.5 -- Unlit colour and alpha mapping

Texture type                   Colour per-vertex            Color NULL
                               or per-face

No texture                     Irgb = ICrgb                 Irgb = (1, 1, 1)
                               A = 1                        A = 1

Intensity                      Irgb = IT × ICrgb            Irgb = (IT, IT, IT)
(one-component)                A = 1                        A = 1

Intensity+Alpha                Irgb = IT × ICrgb            Irgb = (IT, IT, IT)
(two-component)                A = AT                       A = AT

RGB (three-component)          Irgb = ITrgb                 Irgb = ITrgb
modulateColor = false          A = 1                        A = 1

RGB (three-component)          Irgb = ITrgb × ICrgb         Irgb = ITrgb
modulateColor = true           A = 1                        A = 1

RGBA (four-component)          Irgb = ITrgb                 Irgb = ITrgb
modulateColor = false          A = AT                       A = AT

RGBA (four-component)          Irgb = ITrgb × ICrgb         Irgb = ITrgb
modulateColor = true           A = AT                       A = AT

where:

AT    = normalized [0, 1] alpha value from a 2- or 4-component texture image
ICrgb = interpolated per-vertex colour, or per-face colour, from the Color node
IT    = normalized [0, 1] intensity from a 1- or 2-component texture image
ITrgb = colour from a 3- or 4-component texture image

4.15.3 Lighting 'on'

If the shape is lit (i.e., a Material and an Appearance node are specified for the Shape), the Material and Texture nodes determine the diffuse colour for the lighting equation as specified in Table 4.6.

The Material's diffuseColor field modulates the colour in the texture. Hence, a diffuseColor of white results in the pure colour of the texture, while a diffuseColor of black results in a black diffuse factor regardless of the texture.

The Material's transparency field modulates the alpha in the texture. Hence, a transparency of 0 will result in an alpha equal to that of the texture. A transparency of 1 will result in an alpha of 0 regardless of the value in the texture.
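
For instance, the following sketch passes the texture's own colours through as the diffuse factor (the texture URL is hypothetical, and modulateColor is assumed to be true as in Table 4.6):

    Shape {
        appearance Appearance {
            material Material { diffuseColor 1 1 1 }   # white: ODrgb = ITrgb × (1,1,1) = ITrgb
            texture  ImageTexture { url "wood.png" }
        }
        geometry Box { }
    }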

Table 4.6 -- Lit colour and alpha mapping

Texture type                     Colour per-vertex             Color node NULL
                                 or per-face

No texture                       ODrgb = ICrgb                 ODrgb = IDrgb
                                 A = 1 - TM                    A = 1 - TM

Intensity texture                ODrgb = IT × ICrgb            ODrgb = IT × IDrgb
(one-component)                  A = 1 - TM                    A = 1 - TM

Intensity+Alpha texture          ODrgb = IT × ICrgb            ODrgb = IT × IDrgb
(two-component)                  A = AT                        A = AT
modulateTransparency = false

Intensity+Alpha texture          ODrgb = IT × ICrgb            ODrgb = IT × IDrgb
(two-component)                  A = AT × (1 - TM)             A = AT × (1 - TM)
modulateTransparency = true

RGB texture (three-component)    ODrgb = ITrgb                 ODrgb = ITrgb
modulateColor = false            A = 1 - TM                    A = 1 - TM

RGB texture (three-component)    ODrgb = ITrgb × ICrgb         ODrgb = ITrgb × IDrgb
modulateColor = true             A = 1 - TM                    A = 1 - TM

RGBA texture (four-component)    ODrgb = ITrgb                 ODrgb = ITrgb
modulateColor = false            A = AT                        A = AT
modulateTransparency = false

RGBA texture (four-component)    ODrgb = ITrgb × ICrgb         ODrgb = ITrgb × IDrgb
modulateColor = true             A = AT                        A = AT
modulateTransparency = false

RGBA texture (four-component)    ODrgb = ITrgb                 ODrgb = ITrgb
modulateColor = false            A = AT × (1 - TM)             A = AT × (1 - TM)
modulateTransparency = true

RGBA texture (four-component)    ODrgb = ITrgb × ICrgb         ODrgb = ITrgb × IDrgb
modulateColor = true             A = AT × (1 - TM)             A = AT × (1 - TM)
modulateTransparency = true

where:

IDrgb = material diffuseColor
ODrgb = diffuse factor, used in the lighting equations below
TM    = material transparency
modulateColor        = the modulateColor field from the Texture node
modulateTransparency = the modulateTransparency field from the Texture node

All other terms are as defined in 4.15.2, Lighting 'off'.

4.15.4 Lighting equations

An ideal VRML implementation will evaluate the following lighting equation at each point on a lit surface. RGB intensities at each point on a geometry (Irgb) are given by:

Irgb = IFrgb × (1 - f0)
       + f0 × (OErgb + SUM( oni × attenuationi × spoti × ILrgb
                            × (ambienti + diffusei + speculari) ))

where:

attenuationi = 1 / max(c1 + c2 × dL + c3 × dL², 1)
ambienti     = Iia × ODrgb × Oa
diffusei     = Ii × ODrgb × (N · L)
speculari    = Ii × OSrgb × (N · ((L + V) / |L + V|))^(shininess × 128)

and:

·          = modified vector dot product: if dot product < 0, then 0.0; otherwise, dot product
c1, c2, c3 = light i attenuation
dV         = distance from point on geometry to viewer's position, in coordinate system of current fog node
dL         = distance from light to point on geometry, in light's coordinate system
f0         = fog interpolant, see Table 4.8 for calculation
IFrgb      = currently bound fog's colour
ILrgb      = light i colour
Ii         = light i intensity
Iia        = light i ambientIntensity
L          = (Point/SpotLight) normalized vector from point on geometry to light source i position
L          = (DirectionalLight) -direction of light source i
N          = normalized normal vector at this point on geometry (interpolated from vertex normals specified in Normal node or calculated by browser)
Oa         = Material ambientIntensity
ODrgb      = diffuse colour, from Material node, Color node, and/or texture node
OErgb      = Material emissiveColor
OSrgb      = Material specularColor
oni        = 1, if light source i affects this point on the geometry;
             0, if light source i does not affect this geometry (if farther away than radius for PointLight or SpotLight, outside of enclosing Group/Transform for DirectionalLights, or on field is FALSE)
shininess  = Material shininess
spotAngle  = acos(-L · spotDiri)
spotBW     = SpotLight i beamWidth
spotCO     = SpotLight i cutOffAngle
spoti      = spotlight factor, see Table 4.7 for calculation
spotDiri   = normalized SpotLight i direction
SUM        = sum over all light sources i
V          = normalized vector from point on geometry to viewer's position

Table 4.7 -- Calculation of the spotlight factor

Condition (in order)                          spoti =

lighti is PointLight or DirectionalLight      1
spotAngle >= spotCO                           0
spotAngle <= spotBW                           1
spotBW < spotAngle < spotCO                   (spotAngle - spotCO) / (spotBW - spotCO)


Table 4.8 -- Calculation of the fog interpolant

Condition                                     f0 =

no fog                                        1
fogType "LINEAR", dV < fogVisibility          (fogVisibility - dV) / fogVisibility
fogType "LINEAR", dV > fogVisibility          0
fogType "EXPONENTIAL", dV < fogVisibility     exp(-dV / (fogVisibility - dV))
fogType "EXPONENTIAL", dV > fogVisibility     0

4.15.5 References

The VRML lighting equations are based on the simple illumination equations given in E.[FOLE] and E.[OPEN].

--- VRML separator bar ---