Monday, May 28, 2007

models instead of primitives


The same world drawn on the Google Earth satellite layer, and the same world moved to a SWF Flash file using KML as data. Distances are preserved. The two visible models (a torus and a cone-cube) were modeled in 3D Studio Max and placed into Google Earth.

In the last post I described how to render a simple view. However, a static image is not that fascinating; it just shows that the idea works. Let's give the camera some movement. We have to implement a simple walking and observation mechanism based on keyboard and mouse. Remember the two other listeners that we implemented? Here is the place where we make use of them. DisplayObject3D objects like the camera already implement methods such as moveForward and moveBackward, so what we do is create Boolean variables for all four arrow keys. We set them to true when a KEY_DOWN event is dispatched and to false on KEY_UP. Then we create a method which we invoke every time before renderCamera is called, so that the camera view changes and we get the illusion of moving through space. In that method we set two values: one for moving forward and backward, the second for rotating the view (all that gives FPP-style navigation). We set those values according to the Boolean flags for the keys that were pressed. For instance, if the flag for the up arrow is true we set the move value to 10; if the flag for the down arrow is true we set it to -10; if neither, the value is 0. The second value we change when the left and right arrows are involved. After setting the values we have to update the camera position, which simply means invoking two methods with those calculated values as parameters. We use them like this:

camera.moveForward(firstValue)

camera.rotationZ += secondValue

To summarize, the simple navigation system gives us three methods which have to be invoked with each frame that is rendered (a compact sketch follows after the mouse-look code below).

· calculateCamera – sets the movement and rotation values according to the key flags that were set by the key handlers.

· updateCamera – invokes the two camera methods (move and rotate).

· renderCamera – renders the image from the given camera.

Those three methods are of course invoked inside the onEnterFrame method, which is executed at framerate speed. To add mouse navigation we will use the Sprite properties mouseX and mouseY. In our case they have to be connected with the camera's Z and X axes. We tweak the rotation and movement speed with some fixed values. The outline of the code should be more or less like this:

camera.rotationZ = 0 - pv3DSprite.mouseX * 0.6

camera.rotationX = 90 - pv3DSprite.mouseY * 0.2

These lines are placed inside the onEnterFrame method or any of the three methods presented above (they are invoked by onEnterFrame after all). The onEnterFrame method generates an image that is put onto the Sprite object, which is connected with our Canvas3D component lying inside the Panel component in the MXML file. Compile the whole application along with a simple KML file into a SWF file. If you now run the file in the browser you should be able to fly like a free bird among the KML buildings. An inspiring view, isn't it?
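Putting the pieces together, here is a minimal sketch of the navigation loop. The flag names (upPressed, downPressed, ...) and the member variables camera, scene, and pv3DSprite are my own naming assumptions, not the exact ones from walk3D.as:

// assumes: import flash.events.Event; import flash.events.KeyboardEvent;
// import flash.ui.Keyboard; camera, scene, pv3DSprite as in the engine class
private var upPressed:Boolean = false;
private var downPressed:Boolean = false;
private var leftPressed:Boolean = false;
private var rightPressed:Boolean = false;
private var moveValue:Number = 0;
private var rotateValue:Number = 0;

private function keyDownHandler(e:KeyboardEvent):void {
    if (e.keyCode == Keyboard.UP)    upPressed = true;
    if (e.keyCode == Keyboard.DOWN)  downPressed = true;
    if (e.keyCode == Keyboard.LEFT)  leftPressed = true;
    if (e.keyCode == Keyboard.RIGHT) rightPressed = true;
}

private function keyUpHandler(e:KeyboardEvent):void {
    if (e.keyCode == Keyboard.UP)    upPressed = false;
    if (e.keyCode == Keyboard.DOWN)  downPressed = false;
    if (e.keyCode == Keyboard.LEFT)  leftPressed = false;
    if (e.keyCode == Keyboard.RIGHT) rightPressed = false;
}

private function calculateCamera():void {
    moveValue   = upPressed   ? 10 : (downPressed  ? -10 : 0);
    rotateValue = leftPressed ? -3 : (rightPressed ?   3 : 0);
}

private function updateCamera():void {
    camera.moveForward(moveValue);
    camera.rotationZ += rotateValue;
    // alternatively, mouse look as in the two lines shown above:
    // camera.rotationZ = 0 - pv3DSprite.mouseX * 0.6;
    // camera.rotationX = 90 - pv3DSprite.mouseY * 0.2;
}

private function onEnterFrame(e:Event):void {
    calculateCamera();
    updateCamera();
    scene.renderCamera(camera);
}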

Now that we can see some primitives, the smartest move to improve the city view would be to develop a way of viewing 3D models, which give a much better level of visualization and immersion. There are plenty of modelers available these days, some commercial and some under free licenses. The one thing they have in common is that they have adopted the Collada file as one of the standard formats that models are saved to. As you might remember, Google Earth in its latest version supports Collada files. And as you might also remember from our speculations at the beginning of this document, the PV3D engine comes with two model formats it can adopt, ASE and DAE. So, where to start? First of all, let us see how Collada models are added into Google Earth.

In Google Earth press the Shift+Ctrl+M combination or go to Menu, Add, and choose Model. You will be presented with a panel to enter the longitude and latitude values for the model center. By default the values from the center point of the current view are entered. Click browse and a file chooser will pop up for you to point to a DAE Collada model file. If you now tilt the view you will see the model you just added. One thing worth noticing when projecting models in Google Earth is that they need to have dimensions counted in thousands of units (in the 3D Studio Max case) in order to have a natural size when placed in the Google Earth environment. So let us now save it along with the other polygons into a KML file. The most interesting part, including the Model tag and the link to the local DAE file, is represented very simply in the KML structure and can be seen by opening the KML file.
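For orientation, a Model entry in the saved KML looks more or less like this (the tag names follow the KML schema; the coordinates and file name here are made up):

<Placemark>
  <Model>
    <Location>
      <longitude>-122.108</longitude>
      <latitude>37.403</latitude>
      <altitude>0</altitude>
    </Location>
    <Link>
      <href>models/torus.dae</href>
    </Link>
  </Model>
</Placemark>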

Worth noticing is that inside the Model tag there are still some other very interesting options available, like the scale or orientation of the model, which we can use later. Once we have that structure we go back to our Kml class and add a bit of ActionScript code. First of all we will need two arrays, just like for the Mesh3D objects. Here we will create a COLLADA_FILENAMES array with String objects in it (URLs to Collada files) and COLLADA_TRANSLATIONS, which will hold the translation vectors from WORLD_PIVOT, just as before. Then we add new tags to continue the recursion when a Model tag is found, leading us to the Location tag where we can find the longitude and latitude values for the model. We again use the gnomonic projection to convert the coordinates to the local area. If the Model is the first object read from the KML file, its latitude and longitude are used as WORLD_PIVOT. We save the translation vector and the filename into our COLLADA arrays, preserving the index order. If we did everything properly we end up with two new arrays that we can use within the PV3D engine class, just like we extracted Mesh3D objects from MESH_ARRAY.

Inside the PV3D engine class we look for the line of code where we put the loop for MESH_ARRAY. Now we have to do the same for the Collada arrays, with a few extensions. Let's loop through the COLLADA_FILENAMES elements and for each of them create a new Collada object (a class available from the PV3D package) using the String URL in the constructor. Then we can use the translation vector from the COLLADA_TRANSLATIONS array just like we did for Mesh3D objects, and finally add the Collada object to rootNode. Sounds simple, and it is simple! But it's not going to work that way: it will run, but we are not going to see any of the models that we attached to the KML file. Collada files are XML structures that need to be parsed from file before the PV3D engine can access the data they hold. Parsing the file takes some time; it is slow enough that when PV3D wants to access the data, the data is not ready yet. We need a loader for XML files that will inform us when the whole file is loaded into a virtual structure and is available to us. I did the same while loading the KML file in the beginning, so now is a good time to explain that process.

We will use the URLLoader class that can be found in the AS class package provided with the Flex SDK. The URLLoader class downloads data from a given URL as text, binary data, or XML. The URLLoader object captures all the data before making it available to ActionScript. During the loading process it dispatches events that we can catch to invoke the proper actions (there are also properties like bytesLoaded and bytesTotal, available to create progress bars for instance). To start the loading process on the URLLoader object we need a URL object. The URL we use is a URLRequest object, with the String filename from COLLADA_FILENAMES used as the constructor parameter. By default URLRequest works using the HTTP GET method; however, you can change that by setting the method property on the URLRequest object like this:

var request:URLRequest = new URLRequest("hola.txt");

request.method = URLRequestMethod.POST;

Next we pass it to the URLLoader load method as a parameter. Then we have to add a listener to our URLLoader that will listen for the Event.COMPLETE event and invoke the proper method, onModelComplete in our case. The onModelComplete method is responsible for several things, but there is no magic here: we just put in the code for creating the Collada objects from the array, then the code for moving them along the axes and adding each child to rootNode. We can also put in some additional code, like rotating a model before placing it on stage. The full structure of that loader can be found in the walk3D.as source file (a more detailed description of the process can be found in the Adobe Flex Builder Help). Now just check if it works. If so, you should be able to see the models inside the city. The important thing here is the scale: as I mentioned before it depends on the modeler that you are using; models should be much bigger (thousands of units) to be placed inside Google Earth, and then the size has to be diminished to use them inside the PV3D engine. In the Kml class there is a COLLADA_SIZE parameter which you can use in the Collada object constructor (the second parameter is the scale) from the PV3D bundle. By default this parameter is set to 0.03 so that models fit the scene. Again, you can tweak it if you are using a modeler other than 3D Studio Max, so that the models fit the 3D world.
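A minimal sketch of that loader, assuming the array and method names used in the text (the Collada constructor with scale as second parameter follows the description above; the real walk3D.as may differ in details):

import flash.events.Event;
import flash.net.URLLoader;
import flash.net.URLRequest;

private var modelsLoaded:int = 0;

private function loadModels():void {
    for (var i:int = 0; i < kml.COLLADA_FILENAMES.length; i++) {
        var loader:URLLoader = new URLLoader();
        loader.addEventListener(Event.COMPLETE, onModelComplete);
        loader.load(new URLRequest(kml.COLLADA_FILENAMES[i]));
    }
}

private function onModelComplete(event:Event):void {
    modelsLoaded++;
    if (modelsLoaded < kml.COLLADA_FILENAMES.length) return; // wait for every file
    for (var i:int = 0; i < kml.COLLADA_FILENAMES.length; i++) {
        var model:Collada = new Collada(kml.COLLADA_FILENAMES[i], kml.COLLADA_SIZE);
        var t:Number3D = kml.COLLADA_TRANSLATIONS[i];
        model.translate(t.x, new Number3D(1, 0, 0)); // move along the x axis
        model.translate(t.y, new Number3D(0, 1, 0)); // move along the y axis
        rootNode.addChild(model);
    }
}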



By Marcin Czech

let's have a 3D walk

We need to extend our knowledge of Flex to create a basis for the whole 3D application. The MXML files, however, should only hold the presentation layer, not the logic itself. So we move all the logic to another AS class and just invoke it from the main MXML file. Of course we start with the Application tag and a Panel inside it to actually be able to view the data. But what we need to do next is create a new graphical component to use for PV3D (remember that PV3D was originally developed for Flash, not for Flex). Designing your own components also brings great benefits which we will use. Probably the main reason for doing so is to make the component and all the logic it performs an independent part (the PV3D visualization in this case). That way we have a reusable component that we can use several times and easily port to other projects, for instance. In Adobe Flex 2.0 we can create components in several ways, depending on what we want to achieve. Before creating a new component, review all the existing ones so that you don't duplicate the work. If you find something you can use, you can extend it (for example a customized Button, Tree, or DataGrid component). We will, however, focus only on developing components in ActionScript, not in MXML, which is also an option but gives much less customizability. So practically you have three ways to create your component:


· Combine several components into a more sophisticated one.

· Extend an existing component, adding some functionality to it.

· Create a whole new component by extending the fundamental UIComponent class.


As you can expect, the third option is the most beneficial for us, because we can work with the code at its base, and this is the way we are going to design the new UIComponent for PV3D.

If you look at the component structure you will see that all the visual components are subclasses of the UIComponent class. If you extend that class to create a new one, you get all the methods and properties that the class holds. The same happens here. The minimum you have to provide is a class constructor. Overriding the rest of the methods depends on what you want to achieve. There are some fundamental methods that a component has. You don't invoke those methods manually; they are invoked automatically in the component life cycle. Here are some of them (you can of course override one or a few of them if you want; we will do that later).


· layoutChrome() – if your component serves as a container class, this is where you can define a border area around the container. The method is invoked automatically when a call to the invalidateDisplayList() method occurs.

· commitProperties() – commits the changes to component properties. Invoked via the invalidateProperties() method.

· measure() – sets the default and minimum size of the component. Invoked via the invalidateSize() method.

· createChildren() – if the component has any children, this is where they are created. For instance, the ComboBox control contains a TextInput which would be created here. Invoked when the addChild() method is used.

· updateDisplayList() – draws the component and the structure of its children on the screen using the component properties. The parent container determines the size of the component itself. Nothing is displayed until this method is invoked. invalidateDisplayList() is the method which triggers it: updateDisplayList() is called on the next render event after invalidateDisplayList() is called.


As I mentioned before, the invocation of those methods is based on the life cycle, which describes the order of steps taken to create a UIComponent. All of this happens behind the scenes, but it is good for us to have an outline of how it works: when the methods are called, what events are dispatched, and when the component becomes visible. Let's consider a fairly simple example:


var container:Box = new Box();
var button:Button = new Button();
button.label = "Aloha World!";
container.addChild(button);


In the first line the component constructor is invoked. Then some of its properties are set; nothing special at all. However, the property setter methods could already call the invalidateProperties(), invalidateSize(), or invalidateDisplayList() methods. In the last line the component is added to its parent. After that Flex runs the following flow:


· Sets the parent property of component.

· Computes the style settings.

· Dispatches the pre-initialize event on the component.

· Calls the createChildren() method.

· Calls invalidation methods (Properties, Size, DisplayList) so that they trigger calls to the methods described earlier.

· Dispatches the initialize event on the component. The component is not laid out yet.

· Dispatches the childAdd event on the container.

· Dispatches the initialize event on the parent container.


The next render event brings the following actions:


· Calls commitProperties().

· Calls measure().

· Calls layoutChrome().

· Calls updateDisplayList().

· Dispatches the updateComplete event on the component.


When the last render event occurs, Flex performs these actions:


· Sets the visible property to true, which makes the component actually visible.

· Dispatches the creationComplete event on the component (only once). The component is sized and processed for layout.

· Dispatches an additional updateComplete event, which is also dispatched whenever the position, layout, size, or other visual properties change.


I know it might look a bit complex, so let me simplify this flow into just a few steps that should actually be interesting for us and worth remembering. The events indicate when the component is created, laid out, and drawn. I will put it graphically for you to get a better perspective.

pre-initialize → initialize → creationComplete → updateComplete


Each of these is an event that is dispatched during the component creation life cycle. When it comes to a component that is a container, meaning that it has other components inside it, the whole process is a bit extended (it includes a creation cycle for each child the container has). There is also an event that is dispatched at the very end, after the Application container tag has created everything: the Application object dispatches the applicationComplete event, which is the last event during startup.

So, after digesting all that knowledge, let's first write a new UIComponent that will be a kind of canvas for our KML object. How do we do that? We start with an ActionScript file and a class declaration that extends the UIComponent class. Then we need the body, which in our case will be a Sprite object and a Rectangle object. The Sprite object, as I mentioned in the technology chapter, is a substitute for MovieClip in the new version of AS. The Sprite object is considered a basic display-list building block. It can be used as a DisplayObject or as a DisplayObjectContainer and possess children. As you might remember, we use this object because we want to create a Scene3D object from the PV3D engine to keep all the objects inside, and the Scene3D object takes a Sprite in its constructor. So this Sprite will be exactly the display area for the three-dimensional world. The other object, the Rectangle, we will use to build a bounding box for the view. We will create the rectangle from the specific container height and width values, so that we can organize a better layout structure for the application. The rectangle will then be set as the component's scrollRect property, which will automatically create a bounding box for us. Once we have those two variables declared, we create a constructor (don't forget to invoke the super() method first inside the constructor).

In our case we need to override just two of the functions from the UIComponent package. The first one is the createChildren() method. As advised by Adobe, we have to check at the beginning whether the children were already created; if not, we create them, set their properties, and finally use the addChild() method. The reason for checking whether the children were created is future-proofing (further extensions of the class). In our case we just create a new Sprite object and add it to the component. The second one is of course updateDisplayList(), which is responsible for actually drawing the whole structure (more detailed descriptions of both methods can be found above).

We also set the Rectangle object's boundaries with the parameters that were given to the method. These parameters we will set in the MXML file while creating the whole application (the width and height attributes in a tag). After that we can set the rectangle as the scrollRect of the component. We do that by simply putting in a line:


scrollRect = clipRect
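A minimal sketch of such a component, using the names from the text (the components package name is my own assumption, and the real class may differ in details):

package components {
    import flash.display.Sprite;
    import flash.geom.Rectangle;
    import mx.core.UIComponent;

    public class Canvas3D extends UIComponent {
        public var pv3DSprite:Sprite;    // display surface handed over to PV3D
        private var clipRect:Rectangle;  // bounding box for the view

        public function Canvas3D() {
            super(); // always call the UIComponent constructor first
        }

        override protected function createChildren():void {
            super.createChildren();
            if (!pv3DSprite) {           // create children only once
                pv3DSprite = new Sprite();
                addChild(pv3DSprite);
            }
        }

        override protected function updateDisplayList(unscaledWidth:Number,
                                                      unscaledHeight:Number):void {
            super.updateDisplayList(unscaledWidth, unscaledHeight);
            clipRect = new Rectangle(0, 0, unscaledWidth, unscaledHeight);
            scrollRect = clipRect;       // clip drawing to the component bounds
        }
    }
}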


We have created the UIComponent that we can use as a canvas and hand over to the PV3D engine. The next thing is the main MXML file that will hold the Application tag and all the other components, including the one we just designed. So we start with the Application tag and put the normal MXML namespace in as an attribute. We then add our own namespace, where we keep our component class. Then we put in a Panel component to keep the layout (inside the Application tag of course) and as its child we add our component, the Canvas3D class.


Now that we have an outline, we are ready to dig into the details. First of all we need id attributes for the Panel and the Canvas3D object, so we name them "mainPanel" and "mainCanvas" respectively. What we want to do is this: after the MXML file is laid out, pass the Canvas3D object to the PV3D engine. Let us consider the PV3D engine as an object for now (an AS class). How do we do that? We use the applicationComplete event, which is dispatched after the whole MXML file is laid out (the last event that occurs), to create a PV3D engine object with our canvas as a parameter. To catch the event we put a new attribute, applicationComplete, into the Application tag, pointing at the method that we want to invoke when the event is dispatched. Just after the opening Application tag we put an ActionScript section which holds the body of that method. The AS section has to be wrapped in specific tags so that the compiler does not confuse it with MXML code; a sketch follows below. The body of the function creates a new Kml object from a given URL (later we might use a text box to pass the URL) and a new PV3D engine object, to whose constructor we pass our canvas object and the created Kml object.
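Putting that together, the main MXML file could look more or less like this (the comps namespace, the file names, and the PV3DEngine class name are illustrative assumptions):

<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
                xmlns:comps="components.*"
                applicationComplete="onApplicationComplete()">
    <mx:Script>
        <![CDATA[
            private var engine:PV3DEngine; // hypothetical engine class name

            private function onApplicationComplete():void {
                var kml:Kml = new Kml("world.kml");       // static URL for now
                engine = new PV3DEngine(mainCanvas, kml); // canvas plus data
            }
        ]]>
    </mx:Script>
    <mx:Panel id="mainPanel" title="3D walk">
        <comps:Canvas3D id="mainCanvas" width="800" height="600"/>
    </mx:Panel>
</mx:Application>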


In that function we passed the canvas and the kml object to the PV3D object, which will take over the drawing logic of the whole world from those two ingredients. So now what we need is that PV3D engine class, which will bind the given KML data to the PV3D engine and draw things on the given canvas, visible later in the SWF file.

Let's create a new class. In the constructor we save the references to the canvas and kml objects that were passed in. We also initialize the stage properties. To access the stage object (the main container of the whole application, like the stage in old Flash) we can use the static method of the Application class or just the stage property of the canvas we were given. A Flash application has only one stage object; every DisplayObject (an object that can be put into the display list) has a stage property which refers to that same stage object. We can set the world graphics quality on the stage object using its quality property and the static values LOW, MEDIUM, HIGH from the StageQuality class. The next thing to remember is that we will need stage listeners to handle events such as keyboard hits. So for now we set three listeners, as follows; I explain in detail how they work later.


· onEnterFrame – the method responsible for updating the view. The event this listener catches is the ENTER_FRAME event, which is dispatched at the speed we set as the framerate for the whole application (in other words it can be considered a frame render).

· keyDownHandler – catches the KEY_DOWN event. Used for navigation with the keyboard.

· keyUpHandler – catches the KEY_UP event. Same as above.


We add the listeners to the stage object in the following manner:


stageObject.addEventListener(eventName, handlerMethod);
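Concretely, for our three listeners (Event and KeyboardEvent come from the flash.events package):

stage.addEventListener(Event.ENTER_FRAME, onEnterFrame);
stage.addEventListener(KeyboardEvent.KEY_DOWN, keyDownHandler);
stage.addEventListener(KeyboardEvent.KEY_UP, keyUpHandler);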


We are now ready to populate the stage with some objects. First of all we create the Scene3D object, which will be the main container for the world. As mentioned before, we use the Sprite object from our Canvas3D component; that Sprite was passed to the constructor of the class we are developing at the moment, and we use it as the one and only parameter of the Scene3D constructor. After that we have a DisplayObjectContainer3D (the Scene3D class) based on a given DisplayObjectContainer (the Sprite class) from the Flex application.

Then we create a camera object. We use the FreeCamera3D class from the PV3D engine. The camera takes several parameters; we will focus on just two of them. This is quite an important issue because it affects the way objects are visible, the clipping planes, and so on. We set the first parameter, the zoom value, to 6. The second one is the front clipping plane, in other words how close we can get to an object before it vanishes; we set its value to 100. With values like that we need to place the camera really far away, 5000 units away in this case. The values I give here are the result of my experiments and give reasonably good viewing results. Feel free to tweak them any way you want.

We will also need a main root object which will hold all the other objects, so that we can, for instance, rotate the whole world around its pivot. We add the rootNode, a DisplayObject3D object from PV3D, to our Scene3D object, which is a DisplayObjectContainer3D. If you look at the inheritance tree of AS3 you will find a very similar dependency between DisplayObject and DisplayObjectContainer (pure Flex) and between DisplayObject3D and DisplayObjectContainer3D (the PV3D engine). Understanding this basic class structure for setting up a 3D world in Flex is quite important and easy, so let me quickly review the dependencies again. Creating the 3D world in Flex means setting up an object based on a DisplayObjectContainer (a Sprite for instance), then creating a DisplayObjectContainer3D based on that object (viewing 3D is, after all, a sequence of 2D images). After that we add DisplayObject3D objects to our DisplayObjectContainer3D when they belong to the 3D graphics, and ordinary DisplayObject objects to the DisplayObjectContainer when they are panels, text inputs, or buttons. That way we can produce quite a nice layout with a component for viewing 3D graphics in it.
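In code, the setup described so far might look like this (a sketch assuming the Canvas3D component from before exposes its Sprite as pv3DSprite; the constructor values follow the text):

scene = new Scene3D(canvas.pv3DSprite);  // 3D container built on the 2D Sprite
camera = new FreeCamera3D(6, 100);       // zoom = 6, front clipping plane = 100
camera.z = -5000;                        // pull the camera far back
rootNode = new DisplayObject3D();        // empty object, pivot of the world
scene.addChild(rootNode);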

Now we add the rootNode (which is an empty object) just like any other child, using the addChild method of the Scene3D. After that, every new child we create is added to the rootNode. Through the constructor we were also given a kml object, filled with data from the file specified (statically for now) in the main MXML file, so of course we have access to all the data available within the kml object. Since we are ready to add any data that is suitable for the PV3D engine, we will use the Mesh3D objects available by simply accessing the MESH_ARRAY array of the kml object. So how do we do that? We write a simple loop that goes through all the elements of MESH_ARRAY; for each element we get the reference to the Mesh3D object, then translate it with the translation vector calculated earlier by the Kml class and available from the MESH_TRANSLATION array. We translate each mesh using its translate method along each axis with the given distance (we don't need to translate along the z axis, the (0,0,1) case). The algorithm looks something like this:


for (var i:int = 0; i < kml.MESH_ARRAY.length; i++) {
    var ourMesh:Mesh3D = kml.MESH_ARRAY[i];
    var distances:Vertex3D = kml.MESH_TRANSLATION[i];
    ourMesh.translate(distances.x, new Number3D(1, 0, 0));
    ourMesh.translate(distances.y, new Number3D(0, 1, 0));
    rootNode.addChild(ourMesh);
}


After that we have added all the meshes created within the kml object to the rootNode object. The next step is finally rendering the screen and actually seeing something. To do that we have to execute the renderCamera method of the Scene3D object. It takes a camera object as a parameter, which we have already created. This method, however, has to be invoked not in the initialization process but inside the method responsible for updating the view, which in our case is connected with the ENTER_FRAME event and the onEnterFrame method, so that renderCamera is invoked at the framerate we've set. So let there be light: we can see some objects! In the next step we will set up a bird's-eye camera and get 3D models inside...



By Marcin Czech

Tuesday, May 15, 2007

converting KML to 3D, part 2

As you remember, we access the longitude and latitude values using a recursive loop. The third value in the split string found in the coordinates tag is the altitude, which is counted in meters and doesn't need to be calculated in any specific way. For an extruded polygon we will find latitude, longitude, and altitude in the KML file. The thing worth noticing here is the fact that the coordinates tag doesn't contain the base vertices of the polygon (longitude, latitude, 0), so we have to make copies of the vertices so that the polygons stick to the ground somehow. For storing the polygons we will use a dynamic array which stores arrays of vertices; that way we have easy access to every single vertex (which can be useful in the future, for UV mapping for instance). OK, to revise: at the moment we have a POLYGONS array whose number of elements equals the number of coordinates tags found in the KML file. Each POLYGONS element is an array of the vertices that build up the polygon found.
As you might notice, we do all the measurements from a local pivot (the first vertex of a polygon is its local pivot). If we find another coordinates tag, we do all the calculations again from the (0,0) point. So if we put a dozen polygons into the KML we would end up with all the polygons placed one on top of another. To avoid that, we need to set a WORLD PIVOT and store each polygon's translation vector from it. The WORLD PIVOT is the first local pivot ever found (which is the first vertex ever found). The translations are stored in the POLYGON TRANSLATIONS array, with indices corresponding to the ones in the POLYGONS array. Calculating these translation vectors is easy. We store the latitude, longitude pair of the WORLD PIVOT, and when the algorithm finds another polygon (coordinates tag), it subtracts the polygon's local pivot (first vertex) from the WORLD PIVOT and stores the result in the POLYGON TRANSLATIONS array as a Number3D object. That way we store the data in a much more flexible manner (a model and its world position are independent objects).
The next thing is giving volume to the polygons (they are flat at the moment). We search the POLYGONS arrays and check whether the vertices have a z value greater than 0. If so, it means they are extruded; if not, we leave them as they are (they can be used for streets, for instance). The whole process for extruded polygons consists of copying the vertices with the z value changed to 0 and adding them to that POLYGONS item. In other words, the extruded polygons' vertex arrays should be doubled in length and filled with modified copies of the existing vertices. Now that we have done that, are we ready to create faces for the 3D meshes? Not yet: we have to scale all the vertices so that they are more usable later. The default WORLD SCALE for x, y is 100,000,000, and it is different from the WORLD SCALE for z, which is 10. This is of course because they are calculated differently: x, y are calculated from latitude and longitude, while z is just the altitude parameter set in Google Earth. We scale by multiplying all vertex values by the WORLD SCALE parameters that can be found inside the KML class.
A 3D face is a triangle made of three vertices. Faces make up a mesh structure (a triangulated 3D model) and actually make it visible on screen. By default only one side of a face is visible, which gives better performance. If you look at the Papervision3D structure you will see that to create a mesh we need two things. The first is the package of vertices used in that 3D model, in other words an array of vertices. The second is the array of faces for that mesh. We already have the first ingredient: the items of the POLYGONS array. The second one needs to be developed. So we have to write a loop that goes through all the polygons found in the POLYGONS array, creates a mesh object for each of them, and stores them in the MESH array.
To obtain a faces array from a vertices array we need to loop again, through all the vertices this time, group them in threes, push each new face onto a temporary faces array on the fly, and then use both arrays (vertices and faces) to create a new mesh. The problem here is that we cannot take the vertices just like that to create faces. This is connected with polygon triangulation algorithms, which involve advanced math. Let's quickly review our situation. First of all, we don't need to implement advanced 3D triangulation algorithms like the Delaunay triangulation, because we are dealing here with 2D polygons, or objects which will always be extruded polygons. A general polygon triangulation algorithm would be really nice here, but let us limit ourselves for the moment to convex polygons, and I'll tell you why. Primarily, we can write a simple triangulation algorithm without involving advanced calculations. Secondly, keep in mind that PV3D supports Collada models, which we would like to use later for more complex models, so I suggest spending the coding time elsewhere and writing just a triangulation for convex polygons. The native Collada class in PV3D, for instance, doesn't need a triangulation algorithm because the models are already triangulated when exported from a 3D modeling application. The vertices that we have in the POLYGONS array are ordered in a polyline (the same sequence that we drew in Google Earth). Taking that fact into consideration, the algorithm goes as follows (an AS sketch follows after the list):

1. Set index to 1.
2. Take the first vertex of polygon and mark it as main (vertices [0]).
3. Take vertices from array, “vertices [index]” and “vertices [index +1]”.
4. Create a 3D Face object from these three vertices.
5. Push the face to temporary array.
6. If vertices [index +1] is equal to main vertex, break the loop.
7. If not, increment the index and go to step 3.
8. Create a Mesh 3D object using vertices array and temporary faces array.
9. Store the Mesh in MESH array.
10. Take another polygon (vertices array) if found and go to step 1.
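A minimal sketch of those steps, assuming the PV3D 1.x Face3D, Mesh3D, and Vertex3D classes, and assuming the closing vertex of the polyline is the same Vertex3D instance as the first (material handling and the side walls are omitted):

private function triangulatePolygonVertices(vertices:Array):Array {
    var faces:Array = [];
    var main:Vertex3D = vertices[0];          // step 2: fan centre
    var index:int = 1;                        // step 1
    while (index + 1 < vertices.length) {
        var a:Vertex3D = vertices[index];     // step 3
        var b:Vertex3D = vertices[index + 1];
        faces.push(new Face3D([main, a, b])); // steps 4-5 (clockwise order)
        if (b == main) break;                 // step 6: polyline closed
        index++;                              // step 7
    }
    return faces;                             // used in steps 8-9 by createMeshes
}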

That way we have triangulated the roof of the cubic model (or the simple flat polygon, if it has no height) into a Mesh3D model. One more thing we have to do here is create faces for the side walls of our models, in a simple loop using the copied vertices. Another thing to remember is to pass the vertices in clockwise order while creating the faces, so that we can see them later (if you still can't see them, try passing them in the other direction). After all these steps we should end up with a MESH array and a MESH TRANSLATIONS array, both available from within the KML class, ready to connect with the Flex application and Papervision3D. Here are the most important parts of the KML class body (more detailed information can be found in the KML class documentation provided).
Properties:
• KML_FILE : XML – *.kml files are nothing more than specific XML files. This property holds the file's structure inside the KML class so that we can process it using E4X.
• POLYGONS : Array – each of its elements is another array consisting of Vertex3D objects, parsed from the KML_FILE structure and the coordinates tags.
• MESH_TRANSLATION : Array – preserves the translation of each mesh, calculated using the local pivot and the world pivot. Its order (array index) matches the index in the POLYGONS array.
• WORLD_PIVOT : Number3D – the first vertex ever parsed is considered the (0,0,0) point of the created world. All the mesh translation values are calculated from that main point. Its values are the latitude and longitude of the first vertex, with z = 0. There is also a Vertex3D version available.
• MESH_ARRAY : Array – the final array of Mesh3D objects created from the POLYGONS array and the temporary faces arrays.

Methods:

Kml(filename:String) – class constructor. Sets the KML_FILE object and starts the whole process of world creation.

calculateTranslation(world:Vertex3D, local:Vertex3D):Vertex3D – calculates the distance between the given local pivot and the world pivot. Returns a Vertex3D object as the result. The results are stored in the MESH_TRANSLATION array.

gnomonicProjection(lambda:Number, fi:Number, z:Number, localPivot:Number3D, scaleXY:Number, scaleZ:Number):Vertex3D – converts the given longitude (lambda) and latitude (fi) into Cartesian x, y coordinates using the algorithm and equations described earlier. Additionally, it scales the results automatically (pass "1" for the original values). Returns a Vertex3D object that can be used directly in 3D applications.

triangulatePolygonVertices(vertices:Array):Array – triangulates the given array of vertices using the fan algorithm described above. Returns a faces array based on those vertices, so that they can be used to create a mesh. Used in the createMeshes method.

createMeshes():void – takes the POLYGONS array and, using the latter triangulation method, fills the MESH array with new objects, ready to use in PV3D.

Now that we have a class to represent the data, we can actually start creating an application which will be the home for our KML pet. Let’s go then.

Sunday, May 13, 2007

converting KML to 3D, part 1

the pdf version can be found here

OK, so let's put everything together into a workflow. We are setting out to create a fast way of building real 3D worlds in Flex that are playable within a browser with Flash Player installed. We found the Google Earth application to be the most suitable way of providing data for our system. We want the user to be able to create a city, preserving real-world coordinates, distances, and building sizes, in the fastest possible way. As we speculated before about the data, using Google Earth gives users the ability to create polygons (using a real-world satellite image as a drawing layer) for any, and I mean any, city found on the Google Earth surface. It can be part of New York City as well as clay houses in Bamako in Mali. So in fact it doesn't matter whether the user wants to create a small village or the university campus of a big city; it will take the same amount of time (to be more specific, it can be less than 10 minutes; it only depends on the precision the user wants to achieve). Then, after the polygons (and additional data such as building heights, placemarks, etc. – see the KML reference in the Technology chapter) are created, we save them into a KML file. This is the fastest and most intuitive way I found for a normal user to create customizable world data. KML is also commonly used among the web community, for showing user positions on the world for instance (Google Maps). What is more, polygon coordinates are not the only information that can be stored inside a KML file, so we have more options to discover and investigate later. So how are we going to bring the data from a KML file into a browser-playable SWF file? To start, we will need something like a KML class that will serve as a file parser and provide us with the math to calculate the world positions from Google Earth. That class needs to be implemented in ActionScript so that we can use it within the Flex application that produces the SWF file. If we connect those two, we can attach the PV3D engine to the application and try to view the data. At that point we should have 3D-displayable KML data in a SWF file. Next we can create a navigation system based on a first-person-perspective camera (or a bird's eye view, like Windows Live Maps) and attach more options from KML files and other technologies (like support for Collada files or Web Services) to the whole system. OK, so let's set sail into some ActionScript coding.

Before writing some code, let's consider the structure that we would like to have and that would be most universal for further functionality and extensions. The KML file structure, as described earlier, holds a coordinates tag, which is the most interesting data for us at the moment. Finding and getting a coordinates tag should not be a problem using the E4X API in ActionScript 3.0. However, we must take into consideration that users will surely draw more than one polygon to achieve something like a city representation. So what we are really interested in is getting all the polygon tags from the given KML file. Let's have a look at how exactly the coordinates tags are buried in the XML structure when more than one polygon is saved.

see pdf ver.

That kind of structure can of course be repeated (multiple folders) and is marked with closing tags. There can be a different number of placemarks inside each folder as well. For the moment we are not considering children other than the coordinates tag. However, we want the ability to add code later for manipulating data in other tags in an easy way (the placemark's name tag, for instance, which can serve as a name for a Mesh3D object). I think the best way to construct this behavior, digging through the whole structure and taking just those tags that are interesting for us, is a recursive loop. We call a method like processNode on the children of each child encountered on the way, analyze each one, and if it matches one of the tags shown in the outline above, we call processNode on the children of that child (recursion) until we dig down to the desired coordinates tag. With that approach we won't omit any coordinates tag, and in addition we can easily add code for finding other tags. Now, when the tag is found, we take the coordinates data, which is represented as a String, and split it using blank space as a token. After that we have an array of Strings, each of which is a triple consisting of "longitude, latitude, height". So we split again, now using "," as the token, and we have point data to make calculations on. Here is the outline of the processNode algorithm; the full version can be found in the Kml.as source file (an E4X sketch follows after the outline).

function processNode(children:XMLList) {

1. Loop over all children.
2. Get the child's tag name.
3. If the tag name equals coordinates (or another tag that holds data) – calculate the data.
4. If the tag name is a parent tag which can contain a tag with data – recurse by passing the children of that child to processNode.

}
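A hedged E4X sketch of that recursion (the set of container tags and the parseCoordinates helper are illustrative; the real Kml.as handles more cases):

private function processNode(children:XMLList):void {
    for each (var child:XML in children) {
        switch (child.localName()) {
            case "coordinates":
                parseCoordinates(String(child));   // data tag: calculate
                break;
            case "Folder":
            case "Placemark":
            case "Polygon":
            case "outerBoundaryIs":
            case "LinearRing":
                processNode(child.children());     // container tag: recurse
                break;
        }
    }
}

// Hypothetical helper: split on spaces, then on commas, as described above.
private function parseCoordinates(data:String):void {
    for each (var triple:String in data.split(" ")) {
        var parts:Array = triple.split(",");
        var lon:Number = Number(parts[0]);
        var lat:Number = Number(parts[1]);
        var alt:Number = Number(parts[2]);
        // ...gnomonic projection and vertex storage happen here
    }
}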

We can easily add further pieces of code by placing another case statement for a specific tag. So now that we have the data of any given point (longitude and latitude), how do we convert it to local coordinates? To points like (120, 34, 10), for instance. The longitude and latitude values in the coordinates tag are expressed in degrees. As an example, here are two points forming an edge of a building drawn onto the Google Earth raster layer. The points are represented in decimal degrees (DD) format.

-122.1079075592149, 37.40374393714917, 28
-122.1084678195176, 37.40306635543301, 28

As you probably noticed, the differences between those points are on the order of 0.0001. So let's consider the one-polygon case, flat in addition (only x, y). We would set the first point encountered as the world pivot and set it to (0,0,0) (storing, however, its longitude and latitude values as, let's say, the real pivot); then from each next point that we find we would subtract the real pivot's longitude and latitude values, which gives us the translation vector for that point. If we repeat that process for all points we end up with an array of points with values like 0.002345. Then we can multiply them by a specified scale value and there it is: we have the points translated into a more readable coordinate system. OK, but let's verify the actual results by drawing both figures onto a map. Let's take the figure obtained from a square drawn in Berlin and place it in an area near the equator (subtraction without multiplication). The results can be seen in the images at the bottom.

see pdf ver.
Fig.1 : Square polygon drawn in the city of Berlin.

see pdf ver.
Fig.2 : The same polygon moved to equator area (0,0 – longitude, latitude). Subtraction used.

Now, we didn't expect that to happen, did we? This is called the distortion problem of world projection and is strictly connected with creating maps in the old Cartesian way. If we want to handle the problem, we have to look at the same methods that 2D maps are created with. The way presented above, with distortions, is also used in some cases in cartography because of the simplicity of the calculations. On the other hand, distortions on maps are something that will always be there; we can only diminish their influence, we will never be able to fully avoid them. So keep that in mind as well.
So how are maps actually created? What at first glance looks like a simple question can after a while turn into a more specific one: "how the hell can that be done?". Maps in past times were "simple" to draw because people assumed that the world was flat. And it would be so much easier for our project if the fact that the world is a spheroid had never been discovered.
We are interested in moving a chosen part of the spheroid world onto a plane, Cartesian, local map. What we need is a conversion, which is called a map projection system. Traditional maps are much more compact; they are also better for city calculations and easier to view on paper or a computer screen. All that led to the development of many advanced ways of projecting the real world onto a Cartesian plane. There are numerous map projection systems, and their usage depends on what we want to achieve, what district we want to make a map for, and what distortions we are able to accept. Those are the questions we need to answer.

The most popular types are cylindrical map projections, which are, as you probably expect, projections of a sphere onto a cylinder (like turning on a bulb inside a sphere which is inside a cylinder). While this projection can be found in many available maps, it is strongly advised not to use it because of the large distortions, especially at high latitudes. The equator areas are where the distortions are minimal, while the north and south pole areas cover up to nine times larger fields than in reality. There are several types of cylindrical projections. The most famous is called the Mercator projection. It was introduced in 1569 by the Flemish cartographer Gerardus Mercator and became the standard map projection for nautical purposes, especially because of its ability to represent lines of constant bearing (better known as rhumb lines) as straight line segments, which is commonly used to steer a ship in sea navigation (the full name of the first Mercator map was "New augmented description of Earth corrected for the use of navigation", and it was made precisely for navy purposes). It means that if you draw a straight line on a map that is a Mercator projection, that line will fit a loxodrome (rhumb line) in the real world, which can easily be set as a sailing direction using a compass. In that projection the meridians and parallels are straight and perpendicular to each other, creating a grid which represents the world. The farther we go from the equator, the greater the distortions are. Above roughly the 70th degree of latitude the distortions make the data practically unusable. In addition, we will never be able to properly view the North and South Poles with that projection, since the projection goes to infinity in that case. It was fully adopted in the 18th century. Mercator, however, was not the only one contributing to that type of map projection; behind the scenes there were a few other mathematicians who made it possible. Here are the mathematical equations for the projection.
The Cartesian pair x and y is given by these equations, where λ is the longitude of a given point, θ is the latitude, and λ_0 is the longitude of the central point of the map.

see pdf ver.
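For reference (the original images live in the PDF), on a sphere of unit radius the standard Mercator equations with those symbols are:

$$x = \lambda - \lambda_0, \qquad y = \ln\!\left[\tan\!\left(\frac{\pi}{4} + \frac{\theta}{2}\right)\right]$$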

The Mercator projection assumes that the equator is tangent to the surrounding cylinder. This is an old-fashioned way to translate the world into two dimensions. How about the projections that are used these days for creating maps? What is the modern approach to that issue?
As I said before, there is a whole bunch of map projections that can be used; it mostly depends on the circumstances we are in. However, among them there is one considered a standard. If we have a projection that can be considered a standard, countries can easily merge the areas in between and standardize cooperation on an international level (air traffic, for instance). The projection that I'm talking about here, and I'm sure you've heard of it somewhere, is UTM, which stands for Universal Transverse Mercator, and as you probably figured out it is an extended version of the Mercator described previously. If you can picture the simple Mercator, what would you change to make it more suitable for your specific purposes? For instance, when making a map of the Leningrad area, the most suitable thing would be to move it exactly to the equator area (minimal distortions). You can achieve that by making the tangent circle not the equator but the meridian of your choice. That projection onto a cylinder with a switchable tangent meridian is called the Transverse Mercator. When using this kind of projection you get a narrow area along the meridian of your choice with minimal distortions. That solution is much better in reality because, although it diminishes the area that can be projected (the North and South Poles are projected very well when using a meridian, but some of the areas corresponding to the normal pole positions are not), it actually gives you the choice of which part of the world you translate into Cartesian coordinates with minimal distortions (not only the equator areas). It's really useful when projecting maps for countries like Chile. Distances of 5 degrees from the meridian give a 0.4% larger scale; for 10 degrees the value is 1.54%. This method, based on the Mercator of course, was first presented in 1772 by Johann Heinrich Lambert (with a spherical basis) and later, in 1822, by Carl Friedrich Gauss, using an ellipsoid as the basis.

OK, now that you know what a Transverse Mercator is, let's go further with it so that you see in outline how UTM (Universal Transverse Mercator) was founded. Hundreds of years passed again, and more sophisticated navigation systems for measuring distances needed to be invented, especially for military purposes. In 1947 the United States Army developed UTM, which is a grid-based projection, meaning that it is a bunch of individual projections packed into one graphical representation. UTM is separated into 60 individual parts (cylindrical projections), together covering the area from latitude 84 degrees North to 80 degrees South, each part 6 degrees wide (in longitude). These zones are numbered from one to sixty. The first is bounded by longitudes 180 and 174 West and centered on the 177th West meridian; zone numbers increase to the east. As you probably expect, each of these stripes is an individual Transverse Mercator projection, which, used only in a 6-degree-wide area, gives us very little distortion; what is more, the distortions are diminished even further by setting the scale factor at the central meridian to 0.9996, so that they are really minimal over the whole area. The notation of the system is based on a zone number and a latitude letter. While the zone number is an easy guess, the latitude letter is a letter from the English alphabet. Each letter corresponds to an area 8 degrees high, starting from the South Pole side. The letters O and I are omitted because of their similarity to the values 0 and 1, and the last letter, X, is extended by 4 degrees. A nice trick to remember is that N stands for the first band of the northern hemisphere.
A position referenced in the UTM system is given as follows: UTM longitude zone, then an easting/northing coordinate pair. The easting is the distance from the zone meridian, while the northing is the distance from the equator. To avoid dealing with negative numbers, the easting for each zone is set to 500,000 meters at the central meridian, decreasing to the west and increasing to the east. The northing is 0 at the equator and increases to the north, up to 9,328,000 meters at the 84th parallel. As you go south from the equator, the northings decrease, starting from 10,000,000. That way none of the points have negative values, and we have the way of translating longitude and latitude to a Cartesian map that we need. Let's now have a look at the equations:

see pdf ver.

Where x, y is the pair of Cartesian coordinates after conversion, θ is the latitude, and λ is the longitude. Now, how would we implement the UTM system in our solution? We would have to calculate it using the equations and then do some parsing to get the correct zones – but let me say something. If you look a bit deeper, it's too much; maybe not complicated, but involved, and it still possesses some distortions if we consider distances around 2 feet. So after all, UTM might not be such a good solution for us, as it is mainly for representing large-scale maps with minimal distortions. In our solution we want to get a representation of cities, university campuses, and so on, in other words not the whole world or areas of country size. It would be good to stick with the UTM standard, but for now, with the limitations of 3D in the browser, we should be satisfied to focus on cities, and therefore on some other, more specific projection type. UTM could still be implemented and used to represent the world in 3D for purposes like a 3D route map on an airline's website: it would only consist of a simple plane (a UTM country or world map) and some boxes (airports), plus an aircraft Collada model for example, all visualized in a very fancy manner. So what I suggest this time is to use a projection that is based on a local area rather than on meridians, equators, and so on. I didn't have to look far to find it, because such a projection is probably one of the oldest existing map projections, and it's reasonably simple as well. Let me introduce you to the gnomonic projection.
Gnomon is an ancient Greek word meaning "indicator". That name was given to the part of a sundial that casts the shadow. Now try to correlate it with map projections, use your creativity, and you get an idea of what the gnomonic projection is. Thales used his imagination and founded this oldest known map projection in the 6th century BC. OK, imagine a plane tangent to the sphere at only one point. The map is obtained by projecting the points lying on the sphere's surface, from the sphere's centre, onto the plane's surface. As you can see in the picture below, the projection can be used for only one hemisphere at a time (if not, antipodal points would land on one another). The tangent point can be set wherever we want it to be, which is exactly what we were looking for. Imagine now, for instance, that we set it to the first point ever found in the KML file. That way the distortions would be almost none in our local area (a city-size map). The specific thing about the gnomonic projection is that it always shows great circles as straight lines, and because of that ability it is commonly used in fields like seismology or the military (seismic waves and radio signals tend to travel along great circles). The projected map has huge distortions if you look at its full-hemisphere view; it is exactly like looking through a spherical lens. However, if you want to project a map for a country-size region (not Russia or the US, of course) you can go for it: just place the tangent point in the center and you will have a pretty realistic projection. And, as you remember, we are mainly interested in city-size maps, so in other words we will have huge space reserves for our local city and really minimal distortions in addition. It is perfect for us, with its simplicity as well. Here is some math for the gnomonic projection:

see pdf ver.
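For reference (the original images live in the PDF), the standard gnomonic projection equations, with the symbols defined just below, are:

$$x = \frac{\cos\theta\,\sin(\lambda - \lambda_0)}{\cos c}, \qquad y = \frac{\cos\theta_0\,\sin\theta - \sin\theta_0\,\cos\theta\,\cos(\lambda - \lambda_0)}{\cos c}$$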

Where x, y correspond to the Cartesian coordinates after conversion; θ, λ are the latitude and longitude of the given point; θ_0, λ_0 are the latitude and longitude of the central point; and c is the angular distance between the currently measured point and the central point of the projection. Its value is given by the equation:

see pdf ver.
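In its standard form (quoted here for reference since the original image lives in the PDF) that equation reads:

$$\cos c = \sin\theta_0\,\sin\theta + \cos\theta_0\,\cos\theta\,\cos(\lambda - \lambda_0)$$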

There is again a variety of modifications of that plane projection (a plane intersecting the sphere, the gnomon point at infinity), but we really don't need to make things more complex; this one is more than enough for our calculations, and if you check the shapes that you get from that projection you won't see a difference.
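To make that concrete, here is a hedged AS3 sketch of the conversion, following the gnomonicProjection signature described in part 2. The assumption that localPivot stores longitude in x and latitude in y, both in degrees, is mine:

private function gnomonicProjection(lambda:Number, fi:Number, z:Number,
                                    localPivot:Number3D,
                                    scaleXY:Number, scaleZ:Number):Vertex3D {
    var toRad:Number = Math.PI / 180;
    var theta:Number  = fi * toRad;              // latitude of the point
    var lam:Number    = lambda * toRad;          // longitude of the point
    var theta0:Number = localPivot.y * toRad;    // latitude of the world pivot
    var lam0:Number   = localPivot.x * toRad;    // longitude of the world pivot

    // cos c: angular distance between the point and the projection centre
    var cosC:Number = Math.sin(theta0) * Math.sin(theta)
                    + Math.cos(theta0) * Math.cos(theta) * Math.cos(lam - lam0);

    var x:Number = Math.cos(theta) * Math.sin(lam - lam0) / cosC;
    var y:Number = (Math.cos(theta0) * Math.sin(theta)
                 -  Math.sin(theta0) * Math.cos(theta) * Math.cos(lam - lam0)) / cosC;

    return new Vertex3D(x * scaleXY, y * scaleXY, z * scaleZ);
}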

by Marcin Czech