
VRML / X3D extensions in our engine

Contents:

  1. Introduction
  2. Extensions
    1. Specify shading, force Phong shading for a shape (Shape.shading field)
    2. Screen effects (ScreenEffect node)
    3. Bump mapping (normalMap, heightMap, heightMapScale fields of Appearance)
    4. Shadow maps extensions
    5. Shadow volumes extensions
      1. Specify what lights cast shadows for shadow volumes (fields shadowVolumes and shadowVolumesMain for light nodes)
    6. Generate texture coordinates on primitives (Box/Cone/Cylinder/Sphere/Extrusion.texCoord)
    7. Output events to generate camera matrix (Viewpoint.camera*Matrix events)
    8. Generating 3D tex coords in world space (easy mirrors by additional TextureCoordinateGenerator.mode values)
    9. Tex coord generation dependent on bounding box (TextureCoordinateGenerator.mode = BOUNDS*)
    10. 3D text (node Text3D)
    11. Override alpha channel detection (field alphaChannel for ImageTexture, MovieTexture and other textures)
    12. Movies for MovieTexture can be loaded from images sequence
    13. Automatic processing of inlined content (node KambiInline)
    14. Force VRML time origin to be 0.0 at load time (KambiNavigationInfo.timeOriginAtLoad)
    15. Control head bobbing (KambiNavigationInfo.headBobbing* fields)
    16. Executing compiled-in code on Script events (compiled: Script protocol)
    17. CastleScript (castlescript: Script protocol)
    18. Precalculated radiance transfer (radianceTransfer in all X3DComposedGeometryNode nodes)
    19. Mixing VRML 1.0, 2.0, X3D nodes and features
    20. Volumetric fog (additional fields for Fog and LocalFog nodes)
    21. Inline nodes allow to include 3D models in other handled formats (Collada, 3DS, MD3, Wavefront OBJ, others) and any VRML/X3D version
    22. Specify triangulation (node KambiTriangulation)
    23. VRML files may be compressed by gzip
    24. Fields direction and up and gravityUp for PerspectiveCamera, OrthographicCamera and Viewpoint nodes
    25. Mirror material (field mirror for Material node)
    26. Customize headlight (KambiNavigationInfo.headlightNode)
    27. Fields describing physical properties (Phong's BRDF) for Material node
    28. Specify octree properties (node KambiOctreeProperties, various fields octreeXxx)
    29. Interpolate sets of colors (node ColorSetInterpolator)
    30. Extensions compatible with Avalon / instant-reality
      1. Blending factors (node BlendMode and field Appearance.blendMode)
      2. Transform by explicit 4x4 matrix (MatrixTransform node)
      3. Events logger (Logger node)
      4. Teapot primitive (Teapot node)
      5. Texture automatically rendered from a viewpoint (RenderedTexture node)
      6. Plane (Plane node)
      7. Boolean value toggler (Toggler node)
      8. Interpolate sets of floats (node VectorInterpolator)
    31. Extensions compatible with BitManagement / BS Contact
    32. VRML 1.0-specific extensions

1. Introduction

This page documents our extensions to the VRML/X3D standard: new fields, new nodes, allowing you to do something not otherwise possible in VRML/X3D.

Compatibility notes:

  • Other VRML/X3D browsers may not handle these extensions. However, many VRML 2.0 / X3D extensions may be preceded by appropriate EXTERNPROTO statements, which allows other VRML 2.0 / X3D implementations to at least gracefully omit them.

    Our VRML/X3D demo models use the EXTERNPROTO mechanism whenever possible, so that even things inside castle_extensions/ should be partially handled by other VRML browsers.

    Our extensions are identified by URN like "urn:castle-engine.sourceforge.net:node:KambiTriangulation". For compatibility, also deprecated "urn:vrmlengine.sourceforge.net:node:KambiTriangulation" is recognized.

    Our extensions' external prototypes may specify a fallback URL http://castle-engine.sourceforge.net/fallback_prototypes.wrl for VRML 2.0. For X3D, the analogous URL is http://castle-engine.sourceforge.net/fallback_prototypes.x3dv. Such a fallback URL allows other VRML browsers to partially handle our extensions. For example, see the EXTERNPROTO example for Text3D — browsers that don't handle the Text3D node directly should use our fallback URL and render Text3D like a normal 2D text node.

    TODO: the eventual goal is to define all extensions this way, so that they can be nicely omitted. Also, it would be nice to use the similar VRML 1.0 feature (isA and fields) for the same purpose, but it's not implemented (and probably never will be, since VRML 1.0 is basically dead and the VRML 2.0 / X3D externproto is so much better).

  • White dune parses our extension nodes and allows you to design them visually.

  • Some extensions are designed for compatibility with Avalon (instant-reality, InstantPlayer).

Conventions: fields and nodes are specified on this page in a convention similar to the X3D specification:

NodeName : X3DDescendantNode {
  SF/MF-FieldType      [in,out]      fieldName   default_value  # short comment
  ...
}

The access specifiers should be interpreted as follows:

[xxx]      X3D name (for prototypes etc.)   VRML 2.0 name
[]         initializeOnly                   field
[in]       inputOnly                        eventIn
[out]      outputOnly                       eventOut
[in,out]   inputOutput                      exposedField

To understand these extensions you will need some basic knowledge of VRML/X3D; you can find the official VRML / X3D specifications here.

Examples: VRML/X3D models that use these extensions may be found in our VRML/X3D demo models. Look at the directory names there; in particular, the castle_extensions subdirectories (but also some others) are full of demos of our extensions.

2. Extensions

2.1. Specify shading, force Phong shading for a shape (Shape.shading field)

We add a simple field to the Shape node (more precisely, to the abstract X3DShapeNode):

X3DShapeNode (e.g. Shape) {
  ... all normal X3DShapeNode fields ...
  SFString   [in,out]      shading     "DEFAULT"   # ["DEFAULT"|"PHONG"]
}

For now this honors two values:

  • "DEFAULT": use the default shading; the shape is rendered like normal.
  • "PHONG": force Phong (per-pixel) shading when rendering this shape.

In the future, we plan to add other options to this field, like WIREFRAME, FLAT, GOURAUD. These names are not invented by us; they are the names used for "Browser options" in the X3D spec (with DEFAULT added by us).
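For example, this forces Phong shading on a single shape (a minimal sketch):

Shape {
  shading "PHONG"
  appearance Appearance {
    material Material { diffuseColor 0.8 0.2 0.2 }
  }
  geometry Sphere { radius 1 }
}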

2.2. Screen effects (ScreenEffect node)

Screen Effect extensions are described here.

2.3. Bump mapping (normalMap, heightMap, heightMapScale fields of Appearance)

We add to the Appearance node new fields useful for bump mapping:

Appearance : X3DAppearanceNode {
  ... all previous Appearance fields ...
  SFNode     [in,out]      normalMap        NULL        # only 2D texture nodes (ImageTexture, MovieTexture, PixelTexture) allowed
  SFNode     [in,out]      heightMap        NULL        # deprecated; only 2D texture nodes (ImageTexture, MovieTexture, PixelTexture) allowed
  SFFloat    [in,out]      heightMapScale   0.01        # must be > 0
}
(Screenshots: leaf without and with bump mapping; lion texture without and with parallax mapping.)

RGB channels of the texture specified as normalMap describe normal vector values of the surface. Normal vectors are encoded as colors: vector (x, y, z) should be encoded as RGB((x+1)/2, (y+1)/2, (z+1)/2).

You can use e.g. the GIMP normalmap plugin to generate such normal maps from your textures. Hint: remember to check "invert y" when generating normal maps; in image editing programs the image Y coordinate grows down, but we want Y (as interpreted by normals) to grow up, just like the texture T coordinate.

Such a normal map is enough to use the classic bump mapping method, and already enhances the visual look of your scene. For the most effective results, you can place some dynamic light source in the scene — the bump mapping effect is then obvious.

You can additionally specify a height map. Since version 3.10.0 of view3dscene (2.5.0 of the engine), this height map is specified within the alpha channel of the normalMap texture. This leads to an easy and efficient implementation, and it is also easy for texture creators: in the GIMP normal map plugin just set "Alpha Channel" to "Height". A height map allows using the more sophisticated parallax bump mapping algorithm; actually, we have a full implementation of steep parallax mapping with self-shadowing. This can make the effect truly amazing, but also slower.

If the height map (that is, the alpha channel of normalMap) exists, then we also look at the heightMapScale field. This allows you to tweak the perceived height of bumps for parallax mapping.

Since version 3.10.0 of view3dscene (2.5.0 of the engine), the new shader pipeline allows bump mapping to cooperate with all normal VRML/X3D lighting and multi-texturing settings. So the same lights and textures are used in the bump mapping lighting equations, only they have more interesting normals.

Note that bump mapping only works if you also assigned a normal (2D) texture to your shape. We assume that the normal map and the height map are mapped on your surface in the same way (same texture coordinates, same texture transform) as the first texture (in case of multi-texturing).
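For example, a shape with bump mapping could look like this (a minimal sketch; the texture file names are hypothetical, and leaf_normal_map.png is assumed to carry the height map in its alpha channel):

Shape {
  appearance Appearance {
    material Material { }
    texture ImageTexture { url "leaf.png" }
    normalMap ImageTexture { url "leaf_normal_map.png" }
    heightMapScale 0.05
  }
  geometry IndexedFaceSet {
    coord Coordinate { point [ 0 0 0, 1 0 0, 1 1 0, 0 1 0 ] }
    coordIndex [ 0 1 2 3 ]
  }
}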

Examples: see our VRML/X3D demo models.

Note: you can also use these fields within KambiAppearance node instead of Appearance. This allows you to declare KambiAppearance by EXTERNPROTO, that fallbacks on standard Appearance, and thus bump mapping extensions will be gracefully omitted by other browsers. See VRML/X3D demo models for examples.

2.4. Shadow maps extensions

Shadow Maps extensions are described here.

2.5. Shadow volumes extensions

(Screenshots: the castle "fountain" level with shadow volumes, before and after interactively moving and rotating things around; werewolves with shadows.)

These extensions specify the behavior of shadows for the shadow volumes algorithm.

  • To see the shadows, it is necessary to choose one light in the scene (probably the brightest, main light) and set its fields shadowVolumes and shadowVolumesMain both to TRUE. That's it. Everything by default is a shadow caster.

  • Demo VRML/X3D models that use dynamic shadows volumes are inside our VRML/X3D demo models, see subdirectory shadow_volumes/.

  • For shadow volumes to work, all parts of the model that are shadow casters should sum to a number of 2-manifold parts. This means that every edge has exactly 2 (not more, not less) neighbor faces, so the whole shape is a closed volume. Also, faces must be oriented consistently (e.g. CCW outside). This requirement is often naturally satisfied for real-world objects. Consistent ordering also allows you to use backface culling (solid=TRUE in VRML/X3D), which is a good thing on its own.

    In earlier engine/view3dscene versions, it was allowed for some part of the model to not be perfectly 2-manifold. But some rendering problems are unavoidable in this case. See chapter "Shadow Volumes" (inside engine documentation) for description. Since view3dscene 3.12.0, your model must be perfectly 2-manifold to cast any shadows for shadow volumes.

    You can inspect whether your model is detected as 2-manifold by view3dscene: see the menu item Help -> Manifold Edges Information. To check which edges are actually detected as border edges you can use View -> Fill mode -> Silhouette and Border Edges; manifold silhouette edges are displayed in yellow, and border edges (which you want to get rid of) in blue.

    You can also check manifold edges in Blender: you can easily detect why a mesh is not manifold with the Select non-manifold command (in edit mode). Also, remember that faces must be ordered consistently CCW — in some cases Recalculate normals outside (which actually changes the vertex order in Blender) may be needed to reorder them properly.

  • Shadow casters may be transparent (have a material with transparency > 0); this is handled correctly.

    However, note that all opaque shapes must be 2-manifold, and separately all transparent shapes must be 2-manifold. For example, it's OK to have some transparent box cast shadows over the model. But it's not OK to have a shadow casting box composed of two separate VRML/X3D shapes: one shape defining one box face as transparent, the other shape defining the rest of the box faces as opaque.

2.5.1. Specify what lights cast shadows for shadow volumes (fields shadowVolumes and shadowVolumesMain for light nodes)

To all VRML/X3D light nodes, we add two fields:

*Light {
  ... all normal *Light fields ...
  SFBool  [in,out]  shadowVolumes     FALSE
  SFBool  [in,out]  shadowVolumesMain  FALSE  # meaningful only when shadowVolumes = TRUE
}

The idea is that shadows are actually projected from only one light source (with shadow volumes, the number of light sources is limited, since more light sources mean more rendering passes; for now, I decided to use only one light). The scene lights are divided into three groups:

  1. First of all, there's exactly one light that makes shadows, which means that shadows are made where this light doesn't reach. This should usually be the dominant, most intense light on the scene.

    This is taken as the first light node with shadowVolumesMain and shadowVolumes = TRUE. Usually you will set shadowVolumesMain to TRUE on only one light node.

  2. There are other lights that don't determine where shadows are, but are turned off where shadows are. This seems like nonsense from a "realistic" point of view — we turn off the lights even though they may reach a given scene point? But in practice it's often needed to put many lights in this group. Otherwise the scene could be so bright that shadows would not look "dark enough".

    All lights with shadowVolumes = TRUE are in this group. (As you see, the main light has to have shadowVolumes = TRUE also, so the main light is always turned off where the shadow is).

  3. Other lights that light everything. These just work like usual VRML lights, they shine everywhere (actually, according to VRML light scope rules). Usually only the dark lights should be in this group.

    These are lights with shadowVolumes = FALSE (default).

Usually you have to experiment a little to make the shadows look good. This involves determining which light should be the main light (shadowVolumesMain = shadowVolumes = TRUE), and which lights should be just turned off inside the shadow (only shadowVolumes = TRUE). This system tries to be flexible, to allow you to make shadows look good — which usually means "dark, but not absolutely unrealistically black".

In view3dscene you can experiment with this using Edit -> Lights Editor.

If no "main" light is found (shadowVolumesMain = shadowVolumes = TRUE) then shadows are turned off on this model.

Trick: note that you can set the main light to have on = FALSE. This is the way to make a "fake light" — this light will determine the shadows' position (it will be treated as the light source when calculating shadow placement), but will not actually make the scene lighter (be sure to set shadowVolumes = TRUE for some other lights then). This is a useful trick when there is no comfortable main light on the scene, so you want to add it, but you don't want to make the scene actually brighter.
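Putting this together, a typical three-group setup could look like this (a sketch; locations and intensities are arbitrary):

DEF MainLight PointLight {
  location 0 10 0
  shadowVolumes TRUE       # also turned off where shadows are
  shadowVolumesMain TRUE   # this light determines where shadows are
}
PointLight {
  location 5 2 0
  intensity 0.5
  shadowVolumes TRUE       # merely turned off where shadows are
}
PointLight {
  location -5 2 0
  intensity 0.2            # shadowVolumes FALSE (default): shines everywhere
}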

To be deprecated some day: currently shadowVolumes and shadowVolumesMain are the only way to get shadow volumes. However, we plan in the future to instead make our X3DLightNode.shadows field (currently only for shadow maps) usable also for shadow volumes. The shadowVolumes* will become deprecated then.

2.6. Generate texture coordinates on primitives (Box/Cone/Cylinder/Sphere/Extrusion.texCoord)

We add a texCoord field to various VRML/X3D primitives. You can use it to generate texture coordinates on a primitive, by the TextureCoordinateGenerator node (for example to make mirrors), or (for shadow maps) ProjectedTextureCoordinate.

You can even use multi-texturing on primitives, by MultiGeneratedTextureCoordinate node. This works exactly like standard MultiTextureCoordinate, except only coordinate-generating children are allowed.

Note that you cannot use explicit TextureCoordinate nodes for primitives, because you don't know the geometry of the primitive. For a similar reason you cannot use MultiTextureCoordinate (as it would allow TextureCoordinate as children).

Box / Cone / Cylinder / Sphere / Extrusion {
  ...
  SFNode     [in,out]      texCoord    NULL        # [TextureCoordinateGenerator, ProjectedTextureCoordinate, MultiGeneratedTextureCoordinate]
}
MultiGeneratedTextureCoordinate : X3DTextureCoordinateNode {
  SFNode     [in,out]      metadata    NULL        # [X3DMetadataObject]
  SFNode     [in,out]      texCoord    NULL        # [TextureCoordinateGenerator, ProjectedTextureCoordinate]
}

Note: MultiGeneratedTextureCoordinate is not available in older view3dscene versions <= 3.7.0.
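For example, this generates texture coordinates on a Box using the standard "COORD" mode (a minimal sketch; the texture URL is hypothetical):

Shape {
  appearance Appearance {
    texture ImageTexture { url "pattern.png" }
  }
  geometry Box {
    size 1 2 3
    texCoord TextureCoordinateGenerator { mode "COORD" }
  }
}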

2.7. Output events to generate camera matrix (Viewpoint.camera*Matrix events)

To every viewpoint node (this applies to all viewpoints usable in our engine, including all X3DViewpointNode descendants, like Viewpoint and OrthoViewpoint, and even to the VRML 1.0 PerspectiveCamera and OrthographicCamera) we add output events that provide you with the current camera matrix. One use for such matrices is to route them to your GLSL shaders (as uniform variables), and use them inside the shaders to transform between world and camera space.

*Viewpoint {
  ... all normal *Viewpoint fields ...
  SFMatrix4f [out]         cameraMatrix            
  SFMatrix4f [out]         cameraInverseMatrix            
  SFMatrix3f [out]         cameraRotationMatrix            
  SFMatrix3f [out]         cameraRotationInverseMatrix            
  SFBool     [in,out]      cameraMatrixSendAlsoOnOffscreenRendering  FALSE     
}

"cameraMatrix" transforms from world-space (global 3D space that we most often think within) to camera-space (aka eye-space; when thinking within this space, you know then that the camera position is at (0, 0, 0), looking along -Z, with up in +Y). It takes care of both the camera position and orientation, so it's 4x4 matrix. "cameraInverseMatrix" is simply the inverse of this matrix, so it transforms from camera-space back to world-space.

"cameraRotationMatrix" again transforms from world-space to camera-space, but now it only takes care of camera rotations, disregarding camera position. As such, it fits within a 3x3 matrix (9 floats), so it's smaller than full cameraMatrix (4x4, 16 floats). "cameraRotationInverseMatrix" is simply it's inverse. Ideal to transform directions between world- and camera-space in shaders.

"cameraMatrixSendAlsoOnOffscreenRendering" controls when the four output events above are generated. The default (FALSE) behavior is that they are generated only for camera that corresponds to the actual viewpoint, that is: for the camera settings used when rendering scene to the screen. The value TRUE causes the output matrix events to be generated also for temporary camera settings used for off-screen rendering (used when generating textures for GeneratedCubeMapTexture, GeneratedShadowMap, RenderedTexture). This is a little dirty, as cameras used for off-screen rendering do not (usually) have any relation to actual viewpoint (for example, for GeneratedCubeMapTexture, camera is positioned in the middle of the shape using the cube map). But this can be useful: when you route these events straight to the shaders, you usually need in shaders "actual camera" (which is not necessarily current viewpoint camera) matrices.

These events are usually generated only by the currently bound viewpoint node. The only exception is when you use RenderedTexture and set something in RenderedTexture.viewpoint: in this case, RenderedTexture.viewpoint will generate appropriate events (as long as you set cameraMatrixSendAlsoOnOffscreenRendering to TRUE). Conceptually, RenderedTexture.viewpoint is temporarily bound (although it doesn't send isBound/bindTime events).
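For example, here is a sketch of routing one of these matrices to a GLSL shader uniform (X3D classic encoding; the fragment shader URL is hypothetical, and the shader is assumed to declare uniform mat3 cameraRotationInverseMatrix):

Shape {
  appearance Appearance {
    material Material { }
    shaders DEF MyShader ComposedShader {
      language "GLSL"
      inputOutput SFMatrix3f cameraRotationInverseMatrix 1 0 0 0 1 0 0 0 1
      parts ShaderPart { type "FRAGMENT" url "camera_effect.fs" }
    }
  }
  geometry Sphere { radius 1 }
}
DEF MainViewpoint Viewpoint { position 0 0 5 }

ROUTE MainViewpoint.cameraRotationInverseMatrix TO MyShader.cameraRotationInverseMatrix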

2.8. Generating 3D tex coords in world space (easy mirrors by additional TextureCoordinateGenerator.mode values)

(Screenshot: teapot with cube map reflections.)

TextureCoordinateGenerator.mode allows two additional generation modes:

  1. WORLDSPACEREFLECTIONVECTOR: Generates reflection coordinates mapping to a 3D direction in world space. This makes the cube map reflection simulate a real mirror. It's analogous to the standard "CAMERASPACEREFLECTIONVECTOR", which does the same but in camera space, making the mirror reflect mostly the "back" side of the cube, regardless of how the scene is rotated. See the example after this list.
  2. WORLDSPACENORMAL: Use the vertex normal, transformed to world space, as texture coordinates. Analogous to the standard "CAMERASPACENORMAL", which does the same but in camera space.
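For example, combining this with the texCoord extension for primitives (section 2.6) and a generated cube map gives a simple mirror sphere (a minimal sketch):

Shape {
  appearance Appearance {
    material Material { }
    texture GeneratedCubeMapTexture { update "ALWAYS" }
  }
  geometry Sphere {
    radius 1
    texCoord TextureCoordinateGenerator { mode "WORLDSPACEREFLECTIONVECTOR" }
  }
}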

2.9. Tex coord generation dependent on bounding box (TextureCoordinateGenerator.mode = BOUNDS*)

Three more values for TextureCoordinateGenerator.mode:

  1. BOUNDS: Automatically generate nice texture coordinates, suitable for 2D or 3D textures. This is equivalent to either BOUNDS2D or BOUNDS3D, depending on what type of texture is actually used during rendering.
  2. BOUNDS2D: Automatically generate nice 2D texture coordinates, based on the local bounding box of given shape. This texture mapping is precisely defined by the VRML/X3D standard at IndexedFaceSet description.
  3. BOUNDS3D: Automatically generate nice 3D texture coordinates, based on the local bounding box of given shape. This texture mapping is precisely defined by the VRML/X3D standard at Texturing3D component, section "Texture coordinate generation for primitive objects".

Following the VRML/X3D standards, the above texture mappings are automatically used when you supply a texture but no texture coordinates for your shape. Our extension makes it possible to also use these mappings explicitly, when you really want to explicitly use the TextureCoordinateGenerator node. This is useful when working with multi-texturing (e.g. one texture unit may have the BOUNDS mapping, while the other texture unit has a different mapping).
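For example, this explicitly requests the bounding-box based 2D mapping (a minimal sketch; the texture URL is hypothetical):

Shape {
  appearance Appearance {
    texture ImageTexture { url "pattern.png" }
  }
  geometry IndexedFaceSet {
    coord Coordinate { point [ 0 0 0, 2 0 0, 2 1 0, 0 1 0 ] }
    coordIndex [ 0 1 2 3 ]
    texCoord TextureCoordinateGenerator { mode "BOUNDS2D" }
  }
}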

2.10. 3D text (node Text3D)

We add new node:

Text3D : X3DGeometryNode {
  MFString   [in,out]      string      []        
  SFNode     [in,out]      fontStyle   NULL      
  MFFloat    [in,out]      length      []        
  SFFloat    [in,out]      maxExtent   0         
  SFFloat    [in,out]      depth       0.1         # must be >= 0
  SFBool     [in,out]      solid       TRUE      
}

This renders the text, pretty much like the Text node from VRML 97 (see the VRML 97 specification about the string, fontStyle, length, maxExtent fields). But the text is 3D: it's "pushed" by the amount depth into negative Z. The normal text is on Z = 0; the 3D text has its front cap on Z = 0, its back cap on Z = -depth, and of course the extrusion (the sides).

Also, it's natural to apply backface culling to such text, so we have a solid field. When true (the default), backface culling is done. This may provide a significant speedup, unless the camera is able to enter "inside" the text geometry (in which case solid should be set to FALSE).

If depth is zero, then normal 2D text is rendered. However, backface culling may still be applied (if solid is true) — so this node also allows you to make 2D text that is visible only from the front side.

See our VRML/X3D demo models, file text/text_depth.wrl for example use of this.
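For example (a minimal sketch):

Shape {
  appearance Appearance {
    material Material { diffuseColor 1 1 0 }
  }
  geometry Text3D {
    string [ "Hello, 3D world" ]
    depth 0.3
    fontStyle FontStyle { justify "MIDDLE" }
  }
}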

Compatibility:

  • You should specify external prototype before using this node:
    EXTERNPROTO Text3D [
      exposedField MFString string
      exposedField SFNode fontStyle
      exposedField MFFloat length
      exposedField SFFloat maxExtent
      exposedField SFFloat depth
      exposedField SFBool solid
    ] [ "urn:castle-engine.sourceforge.net:node:Text3D",
        "http://castle-engine.sourceforge.net/fallback_prototypes.wrl#Text3D" ]
    

    This way other VRML browsers should be able to render Text3D node like normal 2D Text.

  • This is somewhat compatible with the Text3D node from Parallel Graphics. Initially I implemented this extension differently (kambiDepth, kambiSolid fields for the AsciiText and Text nodes). But later I found the Parallel Graphics Text3D definition, so I decided to make my version compatible.

2.11. Override alpha channel detection (field alphaChannel for ImageTexture, MovieTexture and other textures)

(Screenshot: demo of alphaChannel override.)

Our engine detects the alpha channel type of every texture automatically. There are three possible situations:

  1. The texture has no alpha channel (it is always opaque), or
  2. the texture has simple yes-no alpha channel (transparency rendered using alpha testing), or
  3. the texture has full range alpha channel (transparency rendered by blending, just like partially transparent materials).

The difference between a yes-no and a full range alpha channel is detected by analyzing the alpha channel values. Developers: see the AlphaChannel method reference; the default tolerance values used by the X3D renderer are 5 and 0.01. There is also a special program in the engine sources (see the examples/images_videos/detect_alpha_simple_yes_no.lpr file) if you want to use this algorithm yourself. You can also see the results for your textures if you run view3dscene with the --debug-log option.

Sometimes you want to override results of this automatic detection. For example, maybe your texture has some pixels using full range alpha but you still want to use simpler rendering by alpha testing (that doesn't require sorting, and works nicely with shadow maps).

If you modify the texture contents at runtime (for example by scripts, like demo_models/castle_script/edit_texture.x3dv in the demo models) you should also be aware that alpha channel detection happens only once. It is not repeated later, as this would be 1. slow, and 2. potentially a cause of weird rendering changes. In this case you may also want to force a specific alpha channel treatment, if the initial texture contents are opaque but you want to later modify its alpha channel.

To enable this we add a new field to all texture nodes (everything descending from X3DTextureNode, like ImageTexture and MovieTexture; also Texture2 in VRML 1.0):

X3DTextureNode {
  ... all normal X3DTextureNode fields ...
  SFString   []            alphaChannel  "AUTO"      # "AUTO", "NONE", "SIMPLE_YES_NO" or "FULL_RANGE"
}

The value "AUTO" (the default) means that automatic detection is used. Other values force the specific alpha channel treatment and rendering, regardless of the initial texture contents.
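For example, to force simple alpha testing on a texture regardless of its actual alpha values (a minimal sketch; the texture URL is hypothetical):

ImageTexture {
  url "leaves.png"
  alphaChannel "SIMPLE_YES_NO"   # render with alpha testing, not blending
}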

2.12. Movies for MovieTexture can be loaded from images sequence

(Screenshots: fireplace demo, still image and animation.)

Inside MovieTexture nodes, you can use a URL like my_animation_@counter(1).png to load a movie from a sequence of images. This will load a series of images. We will substitute @counter(<padding>) with successive numbers starting from 0 or 1 (if the filename my_animation_0.png exists, we use it; otherwise we start from my_animation_1.png).

The parameter inside the @counter(<padding>) macro specifies the padding. The number will be padded with zeros to have at least the required length. For example, @counter(1).png results in names like 1.png, 2.png, ..., 9.png, 10.png..., while @counter(4).png results in names like 0001.png, 0002.png, ..., 0009.png, 0010.png, ...
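For example (a sketch; the file names are hypothetical):

MovieTexture {
  url "flame_@counter(4).png"   # loads flame_0001.png, flame_0002.png, ...
                                # (or starts from flame_0000.png if it exists)
  loop TRUE
}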

A movie loaded from image sequence will always run at the speed of 25 frames per second. (Developers: if you use a class like TGLVideo2D to play movies, you can customize the TGLVideo2D.FramesPerSecond property.)

A simple image filename (without the @counter(<padding>) macro) is also accepted as a movie URL. This just loads a trivial movie that consists of one frame and is always still...

Allowed image formats are just like everywhere in our engine — PNG, JPEG and many others, see glViewImage docs for the list.

Besides the fact that loading an image sequence doesn't require ffmpeg installed, using an image sequence also has one very important advantage over any other movie format: you can use images with an alpha channel (e.g. in PNG format), and MovieTexture will be rendered with the alpha channel appropriately. This is crucial if you want to have a video of smoke or flame in your game, since such textures usually require an alpha channel.

Samples of MovieTexture usage are inside our VRML/X3D demo models, in subdirectory movie_texture/.

2.13. Automatic processing of inlined content (node KambiInline)

The new KambiInline node extends the standard Inline node, allowing you to do something like an automatic search-and-replace on the inlined content.

KambiInline : Inline {
  ... all normal Inline fields ...
  MFString   [in,out]      replaceNames  []        
  MFNode     [in,out]      replaceNodes  []          # any VRML node is valid on this list
}

replaceNames specifies the node names to search for in the inlined content; replaceNodes are the new nodes to replace them with. The replaceNames and replaceNodes fields should have the same length. By default the lists are empty, and so KambiInline works exactly like the standard Inline node.

An example when this is extremely useful: imagine you have a VRML file generated by exporting from some 3D authoring tool. Imagine that this tool is not capable of producing some VRML content, so you write a couple of VRML nodes by hand, and inline the generated file. For example this is your generated file, generated.wrl:

#VRML V2.0 utf8

Shape {
  geometry Box { size 1 2 3 }
  appearance Appearance {
    texture DEF Tex ImageTexture { url "test.png" }
  }
}

and this is your file created by hand, final.wrl:

#VRML V2.0 utf8

# File written by hand, because your 3D authoring tool cannot generate
# NavigationInfo node.

NavigationInfo { headlight FALSE }
Inline { url "generated.wrl" }

The advantage of this system is that you can get back to working with your 3D authoring tool, export as many times as you want overriding generated.wrl, and your hand-crafted content stays nicely in final.wrl.

The problem with the above example: what happens if you want to always automatically replace some part inside generated.wrl? For example, assume that your 3D authoring tool cannot export the MovieTexture node, but you would like to use it instead of ImageTexture. Of course, you could just change generated.wrl in any text editor, but this gets very tiresome and dangerous if you plan to later regenerate generated.wrl from the 3D authoring tool: you would have to remember to always replace ImageTexture with MovieTexture after exporting. Needless to say, it's easy to forget about such a thing, and it gets very annoying when more replacements are needed. Here's where KambiInline comes to help. Imagine that you use the same generated.wrl file, and as final.wrl you use

#VRML V2.0 utf8

# File written by hand, because your 3D authoring tool cannot generate
# MovieTexture node.

KambiInline {
  url "generated.wrl"
  replaceNames "Tex"
  replaceNodes MovieTexture { url "test.avi" }
}

Each time final.wrl is loaded, our engine will automatically replace the node Tex in the VRML graph with the specified MovieTexture. Of course the "replacing" happens only in memory; it's not written back to any file, your files are untouched. Effectively, the result is as if you loaded this file:

#VRML V2.0 utf8

Shape {
  geometry Box { size 1 2 3 }
  appearance Appearance {
    texture MovieTexture { url "test.avi" }
  }
}

2.14. Force VRML time origin to be 0.0 at load time (KambiNavigationInfo.timeOriginAtLoad)

By default, the VRML/X3D time origin is at 00:00:00 GMT January 1, 1970 and SFTime reflects real-world time (taken from your OS). In my opinion this is a somewhat broken idea, unsuitable for normal single-user games. So you can change this by using the KambiNavigationInfo node:

KambiNavigationInfo : NavigationInfo {
  ... all normal NavigationInfo fields ...
  SFBool     []            timeOriginAtLoad    FALSE     
}

The default value, FALSE, means the standard VRML behavior. When TRUE, the time origin for this VRML scene is considered to be 0.0 when the browser loads the file. For example, this means that you can easily specify desired startTime values for time-dependent nodes (like MovieTexture or TimeSensor) to start playing at load time, or a determined number of seconds after the scene loads.
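For example, this starts an animation exactly 5 seconds after the scene loads (a minimal sketch; the TimeSensor would be routed to some interpolator):

KambiNavigationInfo { timeOriginAtLoad TRUE }
DEF Timer TimeSensor {
  startTime 5.0        # 5 seconds after load, thanks to timeOriginAtLoad
  cycleInterval 2.0
}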

2.15. Control head bobbing (KambiNavigationInfo.headBobbing* fields)

"Head bobbing" is the effect of camera moving slightly up and down when you walk on the ground (when gravity works). This simulates our normal human vision — we can't usually keep our head at the exact same height above the ground when walking or running :) By default our engine does head bobbing (remember, only when gravity works; that is when the navigation mode is WALK). This is common in FPS games.

Using the extensions below you can tune (or even turn off) the head bobbing behavior. For this we add new fields to the KambiNavigationInfo node (introduced in the previous section, can be simply used instead of the standard NavigationInfo).

KambiNavigationInfo : NavigationInfo {
  ... all normal NavigationInfo fields, and KambiNavigationInfo fields documented previously ...
  SFFloat    [in,out]      headBobbing             0.02      
  SFFloat    [in,out]      headBobbingTime         0.5       
}

Intuitively, headBobbing is the intensity of the whole effect (0 = no head bobbing) and headBobbingTime determines the time of one step of a walking human.

The field headBobbing multiplied by the avatar height specifies how far the camera can move up and down. The avatar height is taken from the standard NavigationInfo.avatarSize (2nd array element). Set this to exactly 0 to disable head bobbing. This must always be < 1. For sensible effects, this should be something rather close to 0. (Developers: see also the TWalkCamera.HeadBobbing property.)

The field headBobbingTime determines how much time passes to make a full head bobbing sequence (camera swings up and then down, back to the original height). (Developers: see also the TWalkCamera.HeadBobbingTime property.)
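For example, to turn head bobbing off entirely (a minimal sketch):

KambiNavigationInfo {
  type [ "WALK", "ANY" ]
  headBobbing 0   # disable head bobbing
}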

2.16. Executing compiled-in code on Script events (compiled: Script protocol)

A special Script protocol "compiled:" allows programmers to execute compiled-in code on normal Script events. "Compiled-in code" simply means that you write a piece of code in ObjectPascal and register it after creating the scene. This piece of code will be executed whenever the script receives an event (when an eventIn of the Script is received, when an exposedField is changed by an event, or when the script receives initialize or shutdown notifications).

This should be very handy for programmers who integrate our VRML engine into their own programs and would like to have some programmed response to VRML events. Using the Script node allows you to easily connect programmed code to the VRML graph: you write the code in Pascal, and in VRML you route anything you want to your script.

For example consider this Script:

  DEF S Script {
    inputOnly SFTime touch_event
    inputOnly SFBool some_other_event
    inputOnly SFInt32 yet_another_event
    url "compiled:
initialize=script_initialization
touch_event=touch_handler
some_other_event=some_other_handler
" }

  DEF T TouchSensor { }
  ROUTE T.touchTime TO S.touch_event

This means that the handler named touch_handler will be executed when the user activates the TouchSensor. As additional examples, I added a handler named script_initialization to be executed on script initialization, and some_other_handler to execute when some_other_event is received. Note that nothing will happen when yet_another_event is received.

As you see, the compiled: Script content simply maps VRML/X3D event names to compiled-in Pascal handler names. Each line maps event_name=handler_name. Lines without the = character map to a handler of the same name, i.e. the simple line event_name is equivalent to event_name=event_name.

To make this actually work, you have to define and register appropriate handlers in your Pascal code. Like this:

type
  TMyObject = class
    procedure ScriptInitialization(Value: TX3DField; const Time: TX3DTime);
    procedure TouchHandler(Value: TX3DField; const Time: TX3DTime);
  end;

procedure TMyObject.ScriptInitialization(Value: TX3DField; const Time: TX3DTime);
begin
  { ... do here whatever you want ...

    Value parameter is nil for script initialize/shutdown handler.
  }
end;

procedure TMyObject.TouchHandler(Value: TX3DField; const Time: TX3DTime);
begin
  { ... do here whatever you want ...

    Value parameter here contains a value passed to Script.touch_event.
    You can cast it to the appropriate field type and get its value,
    like "(Value as TSFTime).Value".

    (Although in case of this example, Value here will always come from
    TouchSensor.touchTime, so it will contain the same thing
    as our Time.Seconds parameter. But in general case, Value can be very useful to you.)
  }
end;

  { ... and somewhere after creating TCastleSceneCore (or TCastleScene) do this: }

  Scene.RegisterCompiledScript('script_initialization', @MyObject.ScriptInitialization);
  Scene.RegisterCompiledScript('touch_handler', @MyObject.TouchHandler);

For a working example of this in Pascal and VRML/X3D, see castle_game_engine/examples/3d_rendering_processing/call_pascal_code_from_3d_model_script.lpr in the engine sources.

2.17. CastleScript (castlescript: Script protocol)

We have a simple scripting language that can be used inside Script nodes. See CastleScript documentation (with examples).

2.18. Precalculated radiance transfer (radianceTransfer in all X3DComposedGeometryNode nodes)

(Screenshots: normal OpenGL lighting; simple ambient occlusion; precomputed radiance transfer.)
X3DComposedGeometryNode : X3DGeometryNode {
  ... all normal X3DComposedGeometryNode fields ...
  MFVec3f    [in,out]      radianceTransfer  []        
}

The field radianceTransfer specifies per-vertex values for Precomputed Radiance Transfer. For each vertex, a vector of N triples is specified (this describes the radiance transfer of this vertex). We use Vec3f, since our transfer is for RGB (so we need 3 values instead of one). The number of items in radianceTransfer must be a multiple of the number of coord points.

Since this field is available in X3DComposedGeometryNode, PRT can be used with most of the VRML/X3D geometry, like IndexedFaceSet. Note that when using PRT, the color values (color, colorPerVertex fields) are ignored (TODO: in the future I may implement mixing). We also add this field to VRML 1.0 IndexedFaceSet, so with VRML 1.0 this works too.

For PRT to work, the object with radianceTransfer computed must keep this radianceTransfer always corresponding to current coords. This means that you either don't animate coordinates, or you animate coords together with radianceTransfer fields. TODO: make precompute_xxx work with animations, and make an example of this.

For more information, see kambi_vrml_game_engine/examples/vrml/radiance_transfer/ demo in engine sources.

TODO: currently radianceTransfer is read but ignored by view3dscene and simple VRML browser components. This means that you have to write and compile some ObjectPascal code (see above radiance_transfer/ example) to actually use this in your games.

2.19. Mixing VRML 1.0, 2.0, X3D nodes and features

Because of the way I implemented VRML 1.0, 2.0 and X3D handling, you effectively have the sum of all VRML features available. This means that you can actually mix VRML 1.0, 2.0 and X3D nodes to some extent. If a given node name exists in two VRML/X3D versions, then the VRML/X3D file header defines how the node behaves. Otherwise, the node behaves according to its VRML/X3D specification.

For example, this means that a couple of VRML 2.0/X3D nodes are available (and behave exactly like they should) also for VRML 1.0 authors:

If you're missing an orthographic viewpoint in VRML 2.0, you can use the VRML 1.0 OrthographicCamera or the X3D OrthoViewpoint.

If you're missing GLSL shaders in VRML 2.0, you can use X3D programmable shaders inside VRML 2.0.

You can also freely include VRML 1.0 files inside VRML 2.0, or X3D, or the other way around.

2.20. Volumetric fog (additional fields for Fog and LocalFog nodes)

We add to all X3DFogObject nodes (Fog and LocalFog) additional fields to allow easy definition of volumetric fog:
X3DFogObject {
  ... all normal X3DFogObject fields ...
  SFBool   [in,out]      volumetric                    FALSE   
  SFVec3f  [in,out]      volumetricDirection           0 -1 0    # any non-zero vector
  SFFloat  [in,out]      volumetricVisibilityStart     0       
}

When "volumetric" is FALSE (the default), every other "volumetricXxx" field is ignored and you have normal (not volumetric) fog following the VRML/X3D specification. When "volumetric" is TRUE, then the volumetric fog described below is used.

"volumetricDirection" determines in which direction density of the fog increases (that is, fog color is more visible). It must not be a zero vector. It's length doesn't matter. Every vertex of the 3D scene is projected on the "volumetricDirection" vector, attached to the origin of fog node coordinate system (TODO: for now, origin of global coordinate system). From the resulting signed distance along this vector we subtract "volumetricVisibilityStart", and then use the result to determine fog amount, just like it would be a distance to the camera for normal fog.

For example in the default case when "volumetricDirection" is (0, -1, 0), then the negated Y coordinate of every vertex determines the amount of fog (that is, fog density increases when Y decreases).

The effect of "volumetricVisibilityStart" is to shift where fog starts. Effectively, fog density changes between the distances "volumetricVisibilityStart" (no fog) and "volumetricVisibilityStart + visibilityRange" (full fog). Remember that "visibilityRange" must be >= 0, as required by VRML/X3D specification. Note that fogType still determines how values between are interpolated, so the fog may be linear or exponential, following normal VRML/X3D equations.

For example if your world is oriented such that the +Y is the "up", and ground is on Y = 0, and you want your fog to start from height Y = 20, you should set "volumetricDirection" to (0, -1, 0) (actually, that's the default) and set "volumetricVisibilityStart" to -20 (note -20 instead of 20; flipping "volumetricDirection" flips also the meaning of "volumetricVisibilityStart").
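In VRML/X3D code, that example could look like this (a sketch; color and ranges are arbitrary):

Fog {
  color 0.8 0.8 1
  fogType "LINEAR"
  visibilityRange 30
  volumetric TRUE
  volumetricDirection 0 -1 0
  volumetricVisibilityStart -20   # no fog above Y = 20, full fog 30 units below that
}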

The "volumetricVisibilityStart" is transformed by the fog node transformation scaling, just like "visibilityRange" in VRML/X3D spec.

Oh, and note that in our programs for now EXPONENTIAL fog (both volumetric and not) is actually approximated by OpenGL exponential fog. Equations for OpenGL exponential fog and VRML exponential fog are actually different and incompatible, so results will be a little different than they should be.

Our VRML/X3D demo models have test models for this (see the fog/fog_volumetric/ subdirectory there). Also our games "malfunction" and "The Castle" use it.

2.21. Inline nodes allow to include 3D models in other handled formats (Collada, 3DS, MD3, Wavefront OBJ, others) and any VRML/X3D version

All inline nodes (Inline in X3D, Inline and InlineLoadControl in VRML >= 2.0 and WWWInline in VRML 1.0) allow you to include any 3D model format understood by our engine. So you can inline not only other VRML/X3D files, but also Collada, 3DS, MD3, Wavefront OBJ models. Internally, all those formats are converted to X3D before displaying anyway. If you want to precisely know how the conversion to X3D goes, you can always try the explicit conversion by "File -> Save as X3D" menu option in view3dscene.

Also, you can freely mix VRML/X3D versions when including. You're free to include VRML 1.0 file inside VRML 2.0 file, or X3D, or the other way around. Everything works.

This also works for jumping to scenes by clicking on an Anchor node — you can make an Anchor to any VRML/X3D version, or a Collada, 3DS, etc. file.

2.22. Specify triangulation (node KambiTriangulation)

(Screenshot: KambiTriangulation demo.)

New node:

KambiTriangulation : X3DChildNode {
  SFInt32  [in,out]      quadricSlices    -1     # {-1} + [3, infinity)
  SFInt32  [in,out]      quadricStacks    -1     # {-1} + [2, infinity)
  SFInt32  [in,out]      rectDivisions    -1     # [-1, infinity)
}

This node affects the rendering of subsequent Sphere, Cylinder, Cone and Cube nodes. For VRML 1.0 you can delimit the effect of this node by using the Separator node, just like with other VRML "state changing" nodes. For VRML 2.0 every grouping node (like Group) always delimits this, so it only affects nodes within its parent grouping node (like many other VRML 2.0 nodes, e.g. DirectionalLight or sensors).

When rendering a sphere, cylinder, cone or cube we will triangulate (divide the surfaces into triangles) with the settings specified in the last KambiTriangulation node. quadricSlices divides like pizza slices, quadricStacks divides like tower stacks, and rectDivisions divides the rectangular surfaces of a Cube. A more precise description of this triangulation is given in the description of the --detail-... options in the view3dscene documentation. The comments given there about so-called over-triangulating apply here as well.

The special value -1 for each of these fields means that the program can use its default value. In the case of view3dscene and rayhunter, they will use values specified by the command-line options --detail-... (or just compiled-in values (see source code) if you didn't specify --detail-... options).

Note that this node gives only hints to the renderer. Various algorithms and programs may realize triangulation differently, and then hints given by this node may be interpreted somewhat differently or just ignored. That said, this node is useful when you're designing some VRML models and you want to fine-tune the compromise between OpenGL rendering speed and the quality of some objects. Generally, triangulate more if the object is large or you want to see light effects (like a light spot) looking good. If the object is small you can triangulate less, to get better rendering time.
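For example, to render a sphere with more detail than the default (a minimal sketch):

KambiTriangulation { quadricSlices 60 quadricStacks 40 }
Shape {
  appearance Appearance { material Material { } }
  geometry Sphere { radius 1 }
}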

Test VRML file: see our VRML/X3D demo models, file vrml_2/castle_extensions/kambi_triangulation.wrl.

2.23. VRML files may be compressed by gzip

All our programs can handle VRML files compressed with gzip.

E.g. you can call view3dscene like

      view3dscene my_compressed_vrml_file.wrl.gz
    
and you can use WWWInline nodes that refer to gzip-compressed VRML files, like
      WWWInline { name "my_compressed_vrml_file.wrl.gz" }
    

Filenames ending with .wrl.gz or .wrz are assumed to be always compressed by gzip.

Files with the normal extension .wrl but actually compressed by gzip are also handled OK. Currently there's a small exception to this: when you give view3dscene a VRML file on stdin, this file must be already uncompressed (so you may need to pipe your files through gunzip -c). TODO: this is intended to be fixed, although honestly it has rather low priority now.

A personal feeling about this feature from the author (Kambi): I honestly dislike the tendency to compress the files with gzip and then change the extension back to normal .wrl. It's handled by our engine, but only because so many people do it. I agree that it's often sensible to compress VRML files by gzip (especially since before X3D, there was no binary encoding for VRML files). But when you do it, it's also sensible to leave the extension as .wrl.gz, instead of forcing it back into .wrl, hiding the fact that contents are compressed by gzip. Reason: while many VRML browsers detect the fact that file is compressed by gzip, many other programs, that look only at file extension, like text editors, do not recognize that it's gzip data. So they treat .wrl file as a stream of unknown binary data. Programs that analyze only file contents, like Unix file, see that it's a gzip data, but then they don't report that it's VRML file (since this would require decompressing).

Also note that WWW servers, like Apache, when queried by modern WWW browser, can compress your VRML files on the fly. So, assuming that VRML browsers (that automatically fetch URLs) will be also intelligent, the compression is done magically over HTTP protocol, and you don't have to actually compress VRML files to save bandwidth.

2.24. Fields direction and up and gravityUp for PerspectiveCamera, OrthographicCamera and Viewpoint nodes

The standard VRML way of specifying camera orientation (look direction and up vector) is to use the orientation field, which says how to rotate the standard look direction vector (<0,0,-1>) and the standard up vector (<0,1,0>). While I agree that this way of specifying camera orientation has some advantages (e.g. we don't have the problem with the uncertainty "is the look direction vector length meaningful?"), I think that it is very uncomfortable for humans.

Reasoning:

  1. It's very difficult for a human to write such an orientation field without a calculator. When you set up your camera, you're thinking "in what direction does it look?" and "where is my head?", i.e. you're thinking about look and up vectors.
  2. Converting between orientation and look and up vectors is trivial for computers but quite hard for humans without a calculator (especially if real-world values are involved, which usually don't look like "nice numbers"). This means that when I look at the source code of your VRML camera node and see your orientation field — well, I still have no idea how your camera is oriented. I have to fire up some calculating program, or one of the programs that view VRML (like view3dscene). This is not a terrible disadvantage, but it still matters to me.
  3. orientation is written with respect to the standard look (<0,0,-1>) and up (<0,1,0>) vectors. So if I want to imagine the camera orientation in my head — I have to remember these standard vectors.
  4. The 4th component of orientation is in radians, which are not nice for humans (when specified as floating point constants, like in VRML files, as opposed to multiples of π, as usual in mathematics). E.g. what's more obvious to you: "1.5707963268 radians" or "90 degrees"? Again, these are equal for a computer, but not readily equal for a human (actually, "1.5707963268 radians" is not precisely equal to "90 degrees").

Also, the VRML 2.0 spec says that the gravity upward vector should be taken as the +Y vector transformed by whatever transformation is applied to the Viewpoint node. This causes similar problems, since e.g. to have the gravity upward vector in +Z you have to apply a rotation to your Viewpoint node.

So I decided to create new fields for the PerspectiveCamera, OrthographicCamera and Viewpoint nodes to allow an alternative way to specify the orientation:

PerspectiveCamera / OrthographicCamera / Viewpoint {
  ... all normal *Viewpoint fields ...
  MFVec3f    [in,out]      direction   []        
  MFVec3f    [in,out]      up          []        
  SFVec3f    [in,out]      gravityUp   0 1 0     
}

If at least one vector in the direction field is specified, then this is taken as the camera look vector. Analogously, if at least one vector in the up field is specified, then this is taken as the camera up vector. This means that if you specify some vectors for direction and up, the value of the orientation field is ignored. The direction and up fields should have either zero or exactly one element.

As usual, the direction and up vectors can't be parallel and can't be zero. They don't have to be orthogonal — the up vector will always be silently corrected to be orthogonal to direction. Lengths of these vectors are always ignored.

As for gravity: the VRML 2.0 spec says to take the standard +Y vector and transform it by whatever transformation was applied to the Viewpoint node. So we modify this to say: take the gravityUp vector and transform it by whatever transformation was applied to the Viewpoint node. Since the default value of the gravityUp vector is just +Y, things work 100% conforming to the VRML spec if you don't specify the gravityUp field.
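For example, a viewpoint for a Z-up world (a minimal sketch):

Viewpoint {
  position 10 10 2
  direction [ -1 -1 0 ]   # look toward the origin, horizontally
  up [ 0 0 1 ]            # the head points toward +Z
  gravityUp 0 0 1         # gravity also works along the Z axis
}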

In view3dscene "Print current camera node" command (key shortcut Ctrl+C) writes camera node in both versions — one that uses orientation field and transformations to get gravity upward vector, and one that uses direction and up and gravityUp fields.

2.25. Mirror material (field mirror for Material node)

You can mark surfaces as being mirrors by using this field.
Material {
  ... all normal Material fields ...
  MFFloat/SFFloat [in,out]      mirror      0.0         # [0.0; 1.0]
}

Currently this is respected only by the classic ray-tracer in view3dscene and rayhunter. Well, it's also respected by the path-tracer, although it's much better to use the fields describing physical properties (Phong's BRDF) for the Material node when using the path-tracer. In the future the mirror field may be somehow respected with normal OpenGL rendering in view3dscene and others.

For VRML 1.0
This field is of multi- type (MFFloat), just like other Material fields in VRML 1.0; this way you can specify many material kinds for one shape node (like IndexedFaceSet).
For VRML 2.0
This field is of simple SFFloat type, just like other Material fields in VRML 2.0.

0.0 means no mirror (i.e. normal surface), 1.0 means the perfect mirror (i.e. only reflected color matters). Values between 0.0 and 1.0 mean that surface's color is partially taken from reflected color, partially from surface's own material color.
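For example (VRML 2.0 / X3D syntax; a minimal sketch):

Appearance {
  material Material {
    diffuseColor 0.2 0.2 0.2
    specularColor 0.8 0.8 0.8
    mirror 0.9   # almost a perfect mirror for the ray-tracer
  }
}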

Note that this field can be (ab)used to specify completely unrealistic materials. That's because it's not correlated in any way with shininess and specularColor fields. In the Real World the shininess of material is obviously closely correlated with the ability to reflect environment (after all, almost all shiny materials are also mirrors, unless they have some weird curvature; both shininess and mirroring work by reflecting light rays). However, in classic ray-tracer these things are calculated in different places and differently affect the resulting look (shininess and specularColor calculate local effect of the lights, and mirror calculates how to mix with the reflected color). So the actual "shiny" or "matte" property of material is affected by shininess and specularColor fields as well as by mirror field.

2.26. Customize headlight (KambiNavigationInfo.headlightNode)

(Screenshots: spot headlight with per-pixel lighting; castle level with sharp and smooth spot headlights.)

You can configure the appearance of the headlight by the headlightNode field of the KambiNavigationInfo node. KambiNavigationInfo is just a replacement for the standard NavigationInfo, adding some extensions specific to our engine.

KambiNavigationInfo : NavigationInfo {
  ... all KambiNavigationInfo fields so far ...
  SFNode [in,out]      headlightNode              NULL    # [X3DLightNode]
}

headlightNode defines the type and properties of the light following the avatar (the "head light"). You can put any valid X3D light node here. If you don't give anything here (but still request the headlight by NavigationInfo.headlight = TRUE, which is the default) then the default DirectionalLight will be used for the headlight.

  • Almost everything (with the exceptions listed below) works as usual for all the light sources. Changing colors and intensity obviously works. Changing the light type, including making it a spot light or a point light, also works.

    Note that for nice spot headlights, you will usually want to enable per-pixel lighting on everything by View->Shaders->Enable For Everything. Otherwise the ugliness of default fixed-function Gouraud shading will be visible in case of spot lights (you will see how the spot shape "crawls" on the triangles, instead of staying in a nice circle). So to see the spot light cone perfectly, and also to see SpotLight.beamWidth perfectly, enable per-pixel shader lighting.

    Note that instead of setting headlight to spot, you may also consider cheating: you can create a screen effect that simulates the headlight. See view3dscene "Screen Effects -> Headlight" for demo, and screen effects documentation for ways to create this yourself. This is an entirely different beast, more cheating but also potentially more efficient (for starters, you don't have to use per-pixel lighting on everything to make it nicely round).

  • Your specified "location" of the light (if you put here PointLight or SpotLight) will be ignored. Instead we will synchronize light location in each frame to the player's location (in world coordinates).

    You can ROUTE your light's location to something, to see it changing.

  • Similarly, your specified "direction" of the light (if this is DirectionalLight or SpotLight) will be ignored. Instead we will keep it synchronized with the player's normalized direction (in world coordinates). You can ROUTE this direction to see it changing.

  • The "global" field doesn't matter. Headlight always shines on everything, ignoring normal VRML/X3D light scope rules.

History: we used to configure the headlight by a different, specialized node. This is still parsed, but ignored in new versions:

KambiHeadLight : X3DChildNode {
  SFFloat    [in,out]      ambientIntensity      0           # [0.0, 1.0]
  SFVec3f    [in,out]      attenuation           1 0 0       # [0, infinity)
  SFColor    [in,out]      color                 1 1 1       # [0, 1]
  SFFloat    [in,out]      intensity             1           # [0, 1]
  SFBool     [in,out]      spot                  FALSE     
  SFFloat    [in,out]      spotDropOffRate       0         
  SFFloat    [in,out]      spotCutOffAngle       π/4    
}

2.27. Fields describing physical properties (Phong's BRDF) for Material node

In rayhunter's path-tracer I implemented Phong's BRDF. To flexibly control the material properties understood by Phong's BRDF, you can use the following Material node fields:

Material {
  ... all normal Material fields ...
  MFColor                          [in,out]      reflSpecular          []          # specular reflectance
  MFColor                          [in,out]      reflDiffuse           []          # diffuse reflectance
  MFColor                          [in,out]      transSpecular         []          # specular transmittance
  MFColor                          [in,out]      transDiffuse          []          # diffuse transmittance
  SFFloat (MFFloat in VRML 1.0)    [in,out]      reflSpecularExp       1000000     # specular reflectance exponent
  SFFloat (MFFloat in VRML 1.0)    [in,out]      transSpecularExp      1000000     # specular transmittance exponent
}

A short informal description of how these properties work (for a precise description see Phong's BRDF equations or the source code of my programs):

reflectance
tells how the light rays reflect from the surface.
transmittance
tells how the light rays transmit into the surface (e.g. inside water or thick glass).
diffuse
describes the property independently of the incoming light direction.
specular
describes the property with respect to the incoming light direction (actually, what matters is the angle between the incoming direction and the direction of the perfectly reflected/transmitted ray).
specular exponent
describes the exponent of the cosine function used in the equations; it says how much the specular color should be focused around the perfectly reflected/transmitted ray.

For VRML 1.0, all these fields have multi- types (like the other fields of the Material node) to allow you to specify many material kinds at once. For VRML >= 2.0 (including X3D) only the four non-exponent fields have multi- types, and this is only to allow you to leave them as zero-length arrays and trigger the auto-calculation described below. Otherwise, you shouldn't place more than one value there in VRML >= 2.0.

The two *SpecularExp fields have default values equal to 1 000 000 = 1 million = practically infinity (bear in mind that they are exponents for a cosine). The other four fields have very special default values: formally, they are zero-length arrays. If they are left as zero-length arrays, they will be calculated as follows:

  • reflSpecular := vector <mirror, mirror, mirror>
  • reflDiffuse := diffuseColor
  • transSpecular := vector <transparency, transparency, transparency>
  • transDiffuse := diffuseColor * transparency

This way you don't have to use any of the six fields described here. You can use only the standard VRML fields (and maybe the mirror field) and the path tracer will use sensible values derived from the other Material fields. If you specify all six fields described here, the path tracer will completely ignore most other Material colors (the normal diffuseColor, specularColor etc. fields will then be ignored by the path tracer; only emissiveColor will be used, to indicate light sources).
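
For example, a sketch of a slightly reflective red material for the path tracer (all values purely illustrative):

  Material {
    diffuseColor 0.8 0.2 0.2
    reflSpecular [ 0.3 0.3 0.3 ]
    reflDiffuse [ 0.8 0.2 0.2 ]
    transSpecular [ 0 0 0 ]      # this material doesn't transmit light
    transDiffuse [ 0 0 0 ]
    reflSpecularExp 50           # much lower than the default 1000000,
                                 # so specular reflections are blurry
  }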

You can use the kambi_mgf2inv program to convert MGF files to VRML 1.0 with these six additional Material fields. This way you can easily test my ray-tracer using your MGF files.

These fields are used only by the path tracer in rayhunter and view3dscene.

2.28. Specify octree properties (node KambiOctreeProperties, various fields octreeXxx)

Octree visualization

Like most 3D engines, Castle Game Engine uses a smart tree structure to handle collision detection in arbitrary 3D worlds. The structure used in our engine is the octree, with a couple of special twists to handle dynamic scenes. See documentation chapter "octrees" for more explanation.

There are some limits that determine how fast the octree is constructed, how much memory it uses, and how fast it can answer collision queries. While our programs have sensible, tested defaults hard-coded, it may be useful (or just interesting for programmers) to test other limits; this is what this extension is for.

In all honesty, I (Michalis) do not expect this extension to be commonly used... It allows you to tweak an important, but internal, part of the engine. For most people, this extension will probably look like incomprehensible black magic. And that's OK, as the internal defaults used in our engine really suit (almost?) all practical uses.

If the above paragraph didn't scare you, and you want to know more about octrees in our engine: besides the documentation chapter "octrees" you can also take a look at the source code and docs of the TCastleSceneCore.Spatial property.

A new node:

KambiOctreeProperties : X3DNode {
  SFInt32 []            maxDepth              -1      # must be >= -1
  SFInt32 []            leafCapacity          -1      # must be >= -1
}

The limit -1 means to use the default value hard-coded in the program. Other values force the generation of an octree with the given limit. For educational purposes, you can experiment with maxDepth = 0: this forces a one-leaf tree, effectively making octree searching work like normal linear searching. You should then see a dramatic loss of game speed on non-trivial models.

To affect the scene octrees you can place KambiOctreeProperties node inside KambiNavigationInfo node. For per-shape octrees, we add new fields to Shape node:

KambiNavigationInfo : NavigationInfo {
  ... all KambiNavigationInfo fields so far ...
  SFNode []            octreeRendering            NULL    # only KambiOctreeProperties node
  SFNode []            octreeDynamicCollisions    NULL    # only KambiOctreeProperties node
  SFNode []            octreeVisibleTriangles     NULL    # only KambiOctreeProperties node
  SFNode []            octreeCollidableTriangles  NULL    # only KambiOctreeProperties node
}
X3DShapeNode (e.g. Shape) {
  ... all normal X3DShapeNode fields ...
  SFNode []            octreeTriangles            NULL    # only KambiOctreeProperties node
}

See the API documentation for the classes TCastleSceneCore and TShape for a precise description of what each octree is. In normal simulation of dynamic 3D scenes, we use only the octreeRendering, octreeDynamicCollisions and Shape.octreeTriangles octrees. Ray-tracers usually use octreeVisibleTriangles.
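
For example, a sketch that overrides the limits of the dynamic collisions octree (the numbers are purely illustrative, not the engine defaults):

  KambiNavigationInfo {
    octreeDynamicCollisions KambiOctreeProperties {
      maxDepth 10
      leafCapacity 64
    }
  }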

We will use the scene octree properties from the first bound NavigationInfo node (see the VRML/X3D specifications for the rules concerning bindable nodes). If this node is not a KambiNavigationInfo, or the appropriate octreeXxx field is NULL, or the appropriate field within KambiOctreeProperties is -1, then the default hard-coded limit will be used.

Currently, it's not perfectly specified what happens to the octree limits when you bind other [Kambi]NavigationInfo nodes during the game. With the current implementation, this will cause the limits to change, but they will actually be applied only when the octree is rebuilt, which may happen never, or only at some radical rebuild of the VRML graph by other events. So if you have multiple [Kambi]NavigationInfo nodes in your world, I advise specifying exactly the same octreeXxx field values in all of them.

2.29. Interpolate sets of colors (node ColorSetInterpolator)

ColorSetInterpolator docs are at the bottom of "Interpolation component" page.

2.30. Extensions compatible with Avalon / instant-reality

We handle some Avalon / instant-reality extensions. See instant-reality and in particular the specifications of Avalon extensions.

Please note that I implemented all of this based on the Avalon specifications, which are quite terse. Please report any incompatibilities.

2.30.1. Blending factors (node BlendMode and field Appearance.blendMode)

Various blend modes with transparent teapots

We add a new field to the Appearance node: blendMode (SFNode, NULL by default, inputOutput). You can put a BlendMode node there to specify the blending mode used for partially-transparent objects. The BlendMode node is not standard X3D, but it's specified by Avalon: see the BlendMode specification.

From the Avalon spec, our engine supports a subset of the fields: srcFactor, destFactor, color, colorTransparency. Note that some values require newer OpenGL versions; if they are not available, we fall back on the browser-specific blending modes (you can set them explicitly in view3dscene).

For example:

  appearance Appearance {
    material Material {
      transparency 0.5
    }
    blendMode BlendMode {
      srcFactor "src_alpha" # actually this srcFactor is the default
      destFactor "one"
    }
  }

The example above sets blending to an often-desired equation where the order of rendering doesn't matter. It's very useful for models with complex, partially-transparent 3D objects, where the traditional approach (src_alpha and one_minus_src_alpha) may cause rendering artifacts.

2.30.2. Transform by explicit 4x4 matrix (MatrixTransform node)

MatrixTransform: we support the matrix field, and the standard X3DGroupingNode fields.

This is analogous to the Transform node, but specifies an explicit 4x4 matrix. Note that VRML 1.0 also had a MatrixTransform node (we handle it too), although it was specified a little differently. Later, VRML 97 and X3D removed the MatrixTransform node from the official specification; this extension fills the gap.

Note that this node was removed from the specifications for a good reason. Our engine can invert your matrix internally (this is needed for some things, like bump mapping), but it's still difficult to extract particular features (like the scaling factor) from such a matrix. Currently our engine extracts scaling factors by a very naive method. (This is planned to be fixed using the unmatrix.c algorithm.) The bottom line is: you are well advised to express all transformations using the standard Transform node whenever you can.

This node may be useful when you really have no choice (for example, when converting from Collada files that store a transformation as an explicit 4x4 matrix, it's natural to convert it to a VRML MatrixTransform).
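
For example, a sketch of a translation by (1, 2, 3) written as an explicit matrix (following the VRML convention, where the translation occupies the bottom row):

  MatrixTransform {
    matrix 1 0 0 0
           0 1 0 0
           0 0 1 0
           1 2 3 1
    children [
      Shape { geometry Box { } }
    ]
  }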

2.30.3. Events logger (Logger node)

Logger is an extremely useful debugging aid when playing with VRML / X3D routes and events. It is based on, and should be quite compatible with, the Avalon Logger node. (Except for our interpretation of logFile, which is probably quite different; see below.)

Logger : X3DChildNode {
  SFNode     [in,out]      metadata    NULL        # [X3DMetadataObject]
  SFInt32    [in,out]      level       1         
  SFString   []            logFile     ""        
  SFBool     [in,out]      enabled     TRUE      
  XFAny      [in]          write                 
}
Logger node demo

The idea is simple: whatever is sent to the write input event is logged on the console. The write event has a special type, called XFAny (also following Avalon), that allows it to receive any VRML field type.
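
For example, a minimal sketch that dumps a TimeSensor's output to the console (node names are, of course, arbitrary):

  DEF Timer TimeSensor { loop TRUE cycleInterval 5 }
  DEF Log Logger { level 2 }
  ROUTE Timer.fraction_changed TO Log.write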

Other fields allow you to control the logging better. When enabled is FALSE, nothing is logged. level controls the amount of logged info:

  0. nothing,
  1. log the sending field name, type, timestamp,
  2. additionally log the received value,
  3. additionally log the sending node name, type.

logFile, when non-empty, specifies the filename to write the log information to. (When logFile is empty, everything is simply dumped to the standard output, i.e. usually the console.) As a security measure (you really do not want to allow the author of an X3D file to overwrite arbitrary files without asking the user), in my implementation only the basename of logFile matters, and the file is always saved into the current directory. Moreover, the filename is like view3dscene_logger_XXX_%d.log, where "view3dscene" is the name of the program, "XXX" is the name specified in logFile, and "%d" is just the next free number. This way the logger output file is predictable, and should never overwrite your data.

These security measures were added by my implementation. The Avalon spec simply says that logFile is the name of the file; I don't know how they handled the security problems with logFile.

2.30.4. Teapot primitive (Teapot node)

Teapot node demo

A teapot. Useful non-trivial shape for testing various display modes, shaders and such.

Compatibility with Avalon Teapot: we support the size and solid fields from Avalon. The geometry orientation and dimensions are the same (although our actual mesh tries to be a little better :) ). The texCoord and manifold fields are our own (Kambi engine) extensions.

Teapot : X3DGeometryNode {
  SFNode     [in,out]      metadata    NULL        # [X3DMetadataObject]
  SFVec3f    []            size        3 3 3     
  SFBool     []            solid       TRUE      
  SFBool     []            manifold    FALSE     
  SFNode     [in,out]      texCoord    NULL        # [TextureCoordinateGenerator, ProjectedTextureCoordinate, MultiGeneratedTextureCoordinate]
}

The "size" field allows you to scale the teapot, much like the standard Box node. The default size (3, 3, 3) means that the longest side of the teapot's bounding box is 3.0 (all other sizes are actually slightly smaller). Changing size scales the teapot (assuming that size = 3 means "default size").

The "texCoord" field may contain a texture-generating node. This is very useful to quickly test various texture coordinate generators (e.g. for cube environment mapping) on the teapot. When texCoord is not present but texture coordinates are required (because the appearance specifies a texture), we will generate default texture coordinates (using the same algorithm as for IndexedFaceSet).

The "solid" field has standard meaning: if true (default), it's assumed that teapot will never be seen from the inside (and backface culling is used to speed up rendering).

The "manifold" field allows you to force the teapot geometry to be correctly closed (2-manifold, where each edge has exactly 2 neighboring faces). This is useful if you want the teapot to cast shadows when using shadow volumes.
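
For example, a sketch of a teapot with sphere-mapped texture coordinates (the texture URL is hypothetical):

  Shape {
    appearance Appearance {
      material Material { }
      texture ImageTexture { url "metal.png" }
    }
    geometry Teapot {
      texCoord TextureCoordinateGenerator { mode "SPHERE" }
    }
  }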

For the sake of the VRML / X3D standards, I do not really advise using this node... VRML browser developers have better ways to spend their time than implementing such nodes of little practical use :), and it's possible to make the same thing with a PROTO. But it's useful for testing purposes.

2.30.5. Texture automatically rendered from a viewpoint (RenderedTexture node)

RenderedTexture demo
RenderedTexture with background and mirrors thrown in

A texture rendered from a specified viewpoint in the 3D scene. This can be used for a wide range of graphic effects; the most straightforward use is to make something like a "security camera" or a "portal", through which the player can peek at what happens at another place in the 3D world.

This is mostly compatible with Avalon RenderedTexture specification. We do not support all Avalon fields, but the basic fields and usage remain the same. This is also implemented in Xj3D, in a compatible way.

RenderedTexture : X3DTextureNode {
  SFNode     [in,out]      metadata              NULL             # [X3DMetadataObject]
  MFInt32    [in,out]      dimensions            128 128 4 1 1  
  SFString   [in,out]      update                "NONE"           # ["NONE"|"NEXT_FRAME_ONLY"|"ALWAYS"]
  SFNode     [in,out]      viewpoint             NULL             # [X3DViewpointNode] (VRML 1.0 camera nodes also allowed)
  SFNode     []            textureProperties     NULL             # [TextureProperties]
  SFBool     []            repeatS               TRUE           
  SFBool     []            repeatT               TRUE           
  SFBool     []            repeatR               TRUE           
  MFBool     [in,out]      depthMap              []             
  SFMatrix4f [out]         viewing                              
  SFMatrix4f [out]         projection                           
  SFBool     [out]         rendering                            
}

The first two numbers in the "dimensions" field specify the width and the height of the texture. (Our current implementation ignores the rest of the dimensions field.)

"update" is the standard field for automatically generated textures (works the same as for GeneratedCubeMapTexture or GeneratedShadowMap). It says when to actually generate the texture: "NONE" means never, "ALWAYS" means every frame (for fully dynamic scenes), "NEXT_FRAME_ONLY" says to update at the next frame (and afterwards change back to "NONE").

"viewpoint" allows you to explicitly specify viewpoint node from which to render to texture. Default NULL value means to render from the current camera (this is equivalent to specifying viewpoint node that is currently bound). Yes, you can easily see recursive texture using this, just look at the textured object. It's quite fun :) (It's not a problem for rendering speed — we always render texture only once in a frame.) You can of course specify other viewpoint node, to make rendering from there.

"textureProperties" is the standard field of all texture nodes. You can place there a TextureProperties node to specify magnification, minification filters (note that mipmaps, if required, will always be correctly automatically updated for RenderedTexture), anisotropy and such.

"repeatS", "repeatT", "repeatR" are also standard for texture nodes, specify whether texture repeats or clamps. For RenderedTexture, you may often want to set them to FALSE. "repeatR" is for 3D textures, useless for now.

"depthMap", if it is TRUE, then the generated texture will contain the depth buffer of the image (instead of the color buffer as usual). (Our current implementation only looks at the first item of MFBool field depthMap.)

"rendering" output event sends a TRUE value right before rendering to the texture, and sends FALSE after. It can be useful to e.g. ROUTE this to a ClipPlane.enabled field. This is our (Kambi engine) extension, not present in other implementations. In the future, "scene" field will be implemented, this will allow more flexibility, but for now the simple "rendering" event may be useful.

"viewing" and "projection" output events are also send right before rendering, they contain the modelview (camera) and projection matrices.

TODO: "scene" should also be supported. "background" and "fog" also. And the default background / fog behavior should change? To match the Xj3D, by default no background / fog means that we don't use them, currently we just use the current background / fog.

2.30.6. Plane (Plane node)

The Avalon Plane node. You should use the Rectangle2D node from X3D 3.2 instead when possible; this is implemented only for compatibility.

Our current implementation doesn't support anything more than the size and solid fields. So inside our engine it's really equivalent to Rectangle2D, the only difference being that Plane.solid is TRUE by default (the Rectangle2D spec says it's FALSE by default).

2.30.7. Boolean value toggler (Toggler node)

The Avalon Toggler node. A simple event utility for setting and observing a boolean value in various ways. Something like the standard X3D BooleanToggle on steroids. We support this node fully, according to the instantreality specs.

Toggler : X3DChildNode {
  SFNode     [in,out]      metadata    NULL        # [X3DMetadataObject]
  SFBool     [in,out]      status      FALSE
  SFBool     [in,out]      notStatus   TRUE
  XFAny      [in]          toggle                  # the type/value sent here is ignored
  XFAny      [in]          set                     # the type/value sent here is ignored
  XFAny      [in]          reset                   # the type/value sent here is ignored
  SFBool     [out]         changed                 # always sends TRUE
  SFBool     [out]         on                      # always sends TRUE
  SFBool     [out]         off                     # always sends TRUE
  SFBool     [in,out]      enabled     TRUE
}

"status" is the boolean value stored. "notStatus" is always the negated value of "status". You can set either of them (by sending "set_status" or "set_notStatus"). Changing any of them causes both output events to be send. That is, "status_changed" receives the new boolean value stored, "notStatus_changed" received the negation of the new value stored.

The input events "toggle", "set" and "reset" provide alternative ways to change the stored boolean value. They accept any VRML/X3D type/value as input (this is called XFAny by Avalon), and the value sent is actually completely ignored. "toggle" always toggles (negates) the stored value, "set" changes the stored value to TRUE, and "reset" changes the stored value to FALSE.

The output events "changed", "on", "off" provide alternative ways to observe the stored boolean value. They always generate a boolean TRUE value when the specific thing happens. The "changed" event is generated when the value changes, the "on" event is generated when the value changes to TRUE, and the "off" event is generated when the value changes to FALSE.

"enabled" allows to disable input/output events handling. When enabled = FALSE then sending input events to above fields has no effect (stored boolean value doesn't change), and no output events are generated.

2.30.8. Interpolate sets of floats (node VectorInterpolator)

VectorInterpolator docs are at the bottom of "Interpolation component" page.

2.31. Extensions compatible with BitManagement / BS Contact

We have a (very crude) implementation of some BitManagement-specific extensions:

  • Circle (treat as standard Circle2D)
  • Layer2D, Layer3D, OrderedGroup (treat as standard Group)
  • MouseSensor (does nothing, we merely parse it Ok)

2.32. VRML 1.0-specific extensions

VRML 1.0-specific extensions are described here.