Sunday, September 2, 2007

compiled version of work and new project

I've added a download section on the right where you can find a compiled and revised version of the project (120 pages, PDF). It covers posts from the beginning of this blog up until June.

Last month I also set up a new project on a similar topic: using GIS, 3D and Flex in social networks and the game industry. Here is the link: GIS with Flex 3D in game industry and social networks

Happy reading!

Saludos!

Marcin

Monday, August 6, 2007

simplifying user experience // PaperVision 1.5 AS 3.0

There is a brand new release of Papervision3D, version 1.5, written fully in ActionScript 3.0. Really nice. Performance is expected to improve by up to 40%. The engine keeps growing and growing, making its way towards providing a great, free 3D experience. You can download the full API from www.blog.papervision3d.org. Enjoy it!

Also, during my coffee break I thought and wrote a bit about simplifying the user experience. You can have a quick read here in PDF format or in the post below.

We are always in pursuit of simplifying things. The word "simplify" is a key word when it comes to all the processes in a company's life. A simplified flow of information, without interference from unnecessary objects, can bring a better understanding of what is happening and why, highlight the drawbacks of our organization, and bring us nothing but profit.

A few days ago I had an idea for simplifying the process of data creation in the project. To create the 3D world we desire, we need to build it from existing raster maps in Google Earth. That was the intention, because we assumed the user wants to create a 3D environment for a specific area and use it on a web page. For instance, to show his neighbourhood, university campus, etc., he must locate the area in Google Earth, then draw polygons and save them to a KML file. My idea here is very simple and comes from the question "what if the user is not interested in data creation?" It means we have to provide him with the data and, what is more, give him a universal interface to achieve that. In other words, we want our solution to work more like Google Maps than a local solution: users get the data on a plate, and they don't need to create it. So if the user specifies a point on the Earth (any point), we give him that area in 3D, without data creation, without KML file creation, just by specifying the longitude and latitude coordinates. Now, one of the possible ways to achieve that is of course to use existing sources from the internet, as we need to create the data on the fly. Let me first show you in a simple outline what I have in mind.
The only thing the user needs to do in that approach is specify the longitude and latitude values he is interested in. That makes the solution more universal and surely reduces the user intervention needed to get what he wants. Additional parameters could be, for instance, the boundaries of the area. OK, that sounds great, but how exactly will we create 3D data for that area (the x box in the diagram) and return it in a playable SWF file?
Now, the first thing that popped into my mind is an image recognition algorithm. If we used Google Earth before to draw polygons on building roofs, on a raster... maybe we could make this process automatic by using image recognition algorithms. We have high-quality images available from Web Services or Google Maps, the buildings are mostly square, and the edges are visible. As we can expect, it would be really difficult, because the images we are interested in are mostly urban areas with a high complexity of objects. However, I would not be worried by that, as we could use Google vector maps instead. Now, if you dig the internet on this subject you should find some very interesting whitepapers, like "Building Detection from High-resolution Satellite Image Using Probability Model" by LIU Wei and Veronique PRINET. By specifying feature vectors and using probabilistic theory, they are able to distinguish building contours (the process also involves the Douglas-Peucker line approximation algorithm). The results are quite interesting and can be seen in the picture.

White lines mark the recognized parts of buildings. By tweaking the algorithm we could achieve 60-70% fidelity to the real building shapes, without taking their height into consideration, of course; the height cannot be obtained by examining the raster image. Another thing is that the computation takes time, approximately 3 minutes, which is a lot when it comes to answering such a query, not to mention the time needed later to create the 3D environment SWF file from it. Another problem I find is that the environment the user gets is not exactly what he expects (not precise, and without height). This is not really what I would like to get as a simple user... even if I had to wait 10 minutes, I would expect a more usable result. So the more logical step would be to use the image recognition algorithm locally on a Google Earth raster and then tweak the polygons manually. However, that requires user intervention and is not really what we wanted to achieve. So maybe there is another way? Let's look again at the outline of the flow.
The only reasonable solution here would be, instead of using an image recognition algorithm, to find data sources that are already in XML format (i.e. exclude the image recognition box from the sketch above). That way we would achieve a fast and universal flow for creating data. In the next post I'll try to find that type of resource available on the internet to the public.

Saludos
By Marcin Czech

Thursday, July 19, 2007

Vamos a la playa

I would like to say thank you for all the e-mails I got with tips and inspiring ideas. I just graduated from the Universidad Politecnica de Madrid with a Master's degree. The project I presented, "Developing GIS with Adobe Flex in 3D", was given a "Matricula de Honor", the highest possible grade, so I'm really happy with that. I also just came back from my 3-week holidays at the Costa de la Luz in Spain; I walked all the way to Tarifa :) just sand, sea and people. So... batteries recharged, back home in Poland with the diploma in my hands, after a good lunch I'm ready to do some more thinking. Thanks again!

Saludos

Marcin

Saturday, June 9, 2007

The Conclusion and future ideas




So at this point we have a working Flex application which does exactly what we wanted it to do. In other words, it makes it possible to view 3D in Flash Player and attach real-world data to it, even using 3D models instead of primitives, and all that within the browser without downloading any plug-in (assuming of course that Flash Player is present). Also, as we speculated in the very early phase of this document, it turned out that simplicity and ease of use can be preserved while upgrading this solution for a more specific goal. As for the practical usage of this kind of solution, it cannot be confirmed at the moment. But if you look at it closely, and then look at the abilities the technologies used here give you, you might get a bunch of ideas about where to use this kind of solution after some tweaking. The solution itself, as I see it now, is an outline of an application, a basis whose functionality can be enhanced in many ways to create the desired, more specific application. However, like the example from Anjali Bhardwaj in one of the first chapters, the solution shows that this kind of connection is fully applicable, and one can create a new way of representing data in the browser using the latest technologies like Flex, merged with those from the niche like Papervision3D.

During the development of this project I've seen many new solutions pop up to more or less satisfy users with a more immersive way of viewing data. Google, which as I said before is a really fast-growing company, tries to add new functionality to Google Maps whenever possible. It's a really good thing to try, much better than being idle. It is just a matter of time before they hit the jackpot with a really usable and practical way of viewing data, providing people with a better understanding of the world and making them a bit addicted to that kind of solution. On the Papervision3D mailing list a few weeks ago I heard about the idea of using Flash inside Google Maps. It is a really nice thing that they did; here is the link: http://maps.google.com/help/maps/streetview/index.html . It is a 3D environment based on a sequence of photographs taken from the Google truck (most probably), merged to create really nice-looking 360-degree panoramas. Sure, it is a fascinating solution, but it is still fake 3D, and as far as history teaches us, the next step should be a real 3D solution. The thing I find most interesting here is that Google used Flash Player for it, which is surely a good sign for our project: it means we kind of predicted the use of Flash Player in this field, although they tried the Flux player in the beginning, as you might remember. Maybe they realized how hard the barrier of a new plug-in is to jump over. What do people look for in this kind of solution; do they really need real 3D immersion? The solution Google has just shown is based on photography, hence much more realistic. It is fast, intuitive and really educational for society. It has all the ingredients needed to make people talk about it. I don't know the details of how the solution was achieved, but if the data gathering is reasonably easy, it will last for a long time, until real 3D in a browser evolves to the level where it can take its place. The point I'm making here is that real 3D is the next step, and to me it is kind of obvious that real 3D solutions must sooner or later overtake the existing ways of representing the world in the browser. I have no doubt about it. It is also possible to merge the realism of Google Street View with the 3D freedom given by 3D engines, why not. The solutions appearing now are just filling the time gap until the real 3D worlds arrive; that is how I feel about it.

So if I say that real 3D will be a big boom in browsers, the next question I should ask myself is about Papervision3D and its future. Is it capable of handling everything people will want from it? After three months spent on its mailing list, and looking at the events around this open source engine, I must say I'm impressed. However, the road to a leading role in 3D worlds on the internet is still a long one. The strong points Papervision3D surely has are a very strong core team and a vivid community. Just to show you that I'm not a person who lets emotions speak for him, I will give you three facts which I find very convincing. The first is that I get an average of five mails daily (each consisting of several threads) from the Papervision3D mailing list, covering many issues. The second is that the core team has just merged with the strong Away3D team, and new designers are still joining. The third is the international exposure: the Flash 3D presentations and case studies made by the PV3D core team, which create awareness of this 3D solution, and finally the awards PV3D was given a few weeks ago in two categories, "People's Choice" and "Experimental", at the FITC 2007 Awards. So if Papervision3D is doing so well, I think that as more and more people start hearing about it, and better yet, using it, some big companies might get involved, and that is exactly the direction PV3D is heading in now. So in my humble opinion it is quite possible that in the near future the PV3D engine will be used in Google Maps, for instance, or with ESRI ArcWeb Explorer, which has its whole structure based on Flex and Flash Player and so would probably integrate even more easily.

So where is the solution we've just created going? Well, I think it has opened many doors to follow, especially when you look at the functionality the Flex environment gives. As I said, consider it a basis, which however can be used in its present form if primitives and models are enough for whatever you are building. For instance, it can be used to place a 3D environment of a university campus into an SWF file, which can then be hosted inside the university's HTML web site as a playable SWF movie to show the campus surroundings. Another option is your city and its web page: whatever you can draw in Google Earth, you can put inside the Flash movie. I will now give you some of my thoughts about future features of this solution and try to point out some problems.

The first task, and I think the most desirable one, is to make the whole appearance more vivid, and by that I mean giving the 3D objects textures so that they look more like the real world. As planned before, you have the Mesh3D objects placed separately in the MESH_ARRAY inside our KML class. You can attach a material to them later, or while creating the object in the constructor (the first parameter is a MaterialObject3D). For that, however, you will need to plan the images you would like to use and plan the UV mapping for each object. If you are planning to represent a 3D campus, for instance, it would be a good idea to take a walk through your campus with a digital camera to give it real-world textures. However, simple building textures from the game industry should do the job as well and make the view more fascinating. The good news is that if you have 3D Collada models, the Papervision3D engine also supports their materials, so you can prepare them in a much more professional manner inside the DCC tools rather than attaching them manually to objects. Google Earth and KML files of course make it possible to view and store models with materials as well.
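To give a flavour of it, here is a minimal sketch, assuming the Papervision3D 1.5 materials package (import paths may differ between engine releases); the image file name is hypothetical, and the UV mapping still has to be planned per face.

import org.papervision3d.core.geom.Mesh3D; // package path may differ between releases
import org.papervision3d.materials.BitmapFileMaterial;

var wallTexture:BitmapFileMaterial = new BitmapFileMaterial("campusWall.jpg"); // hypothetical image
var building:Mesh3D = kml.MESH_ARRAY[0]; // first mesh parsed from the KML file
building.material = wallTexture; // replaces the default flat look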

So if we can make the world more vivid, why not go further and add true ground images? Now, how to do that? I mentioned before a dynamic solution for providing data, and that would be Web Services. Some web services, and even Google Maps requests, give you the ability to acquire a satellite picture of an area which you specify by latitude and longitude boundaries. Sounds familiar? So if we have a Web Service with such an ability (and I know they exist), nothing easier, I say. You take the WORLD_PIVOT object, then add and subtract specific values to get the local bounding box, send them as parameters to the Web Service, and you get as a response a satellite image (a JPG, for instance) for that area. Then you put a plane object (a descendant of the Mesh3D class) on the scene and attach the received image as a material. You should get an image with enough quality to see the houses. That effect should look really nice and work for any area you specify. If you find a Web Service with weather conditions based on the same rules, you can put up another plane for the sky and create true weather images; sounds inspiring.
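A rough sketch of that idea, assuming a hypothetical imagery service URL and that WORLD_PIVOT holds the longitude/latitude of the world origin; BitmapFileMaterial loads the image asynchronously, so the plane appears textured once the download completes.

import org.papervision3d.materials.BitmapFileMaterial;
import org.papervision3d.objects.Plane;

var d:Number = 0.01; // half-size of the bounding box, in degrees
var bbox:String = (WORLD_PIVOT.x - d) + "," + (WORLD_PIVOT.y - d) + ","
                + (WORLD_PIVOT.x + d) + "," + (WORLD_PIVOT.y + d);
var url:String = "http://example.com/satellite?bbox=" + bbox; // hypothetical service
var ground:BitmapFileMaterial = new BitmapFileMaterial(url);
var groundPlane:Plane = new Plane(ground, 10000, 10000, 4, 4); // width, height, segments
rootNode.addChild(groundPlane);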

Now let's open the pot of Flex abilities. First of all, because I also see a future for the gaming industry in the GIS market (I'll speak about it later), I would throw in the idea of connecting several people inside one virtual city, created of course with the Flex + Papervision3D technology, so that they can see each other as, let's say, bitmap avatars. That option, I believe, can be done by using the Adobe Flex messaging system, which was created for multi-user systems (see the Flex technology chapter). If we go further with that idea, we can add a chat area for people to write messages to each other. Why not go further? What about video streaming and video chats? Flex also supports that. I know it can be slow because of bandwidth, but imagine, for instance, a university campus local network which uses that kind of solution to walk around a virtual campus. Video would also be a good idea for advertisements inside the virtual city. I have already seen an example of a Papervision3D plane with an AVI clip playing on it. Try to imagine a virtual city where people walk around like in reality; companies would surely pay money for this infinite billboard space. But what would this city be worth if interaction with the environment were not possible? To fix that, we could add mouse interaction with the 3D objects. You think that would not be practical? During development, one of the community members showed an example of 3D interaction which truly amazed me. You can find it here: http://www.lepers.info/test/pv3d/DrawBall.html . Making a mouse click on a specific object would not be a problem, and then you could open the shop's web site, display shop information in a little attached box, or play a movie clip about the building. Another type of interaction would be, for instance, walking through the city and entering the area next to a monument or some other important thing (or a simple shop), which would display detailed information about it in a specific panel. That can be done by hit-test methods and intersection checking inside the DisplayObject3D class. There is even an example, FlexFocus, that comes with the Papervision3D source and shows how it works. Let's go further and imagine, for instance, that you get really close to the entrance of a 3D model of a building, and the plane area on the ground spots that you are there (by checking the intersection with objects, as mentioned before); it could load another SWF responsible for the inside of the building. Now that sounds like fun.
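As a taste of the proximity idea, here is a minimal sketch meant to be run from onEnterFrame; monument and infoPanel are hypothetical objects, and the distance is computed by hand from the position properties, so it does not depend on any particular helper method of DisplayObject3D.

// check every frame whether the camera is near the monument
private function checkProximity():void {
    var dx:Number = camera.x - monument.x;
    var dy:Number = camera.y - monument.y;
    var dz:Number = camera.z - monument.z;
    var near:Boolean = (dx * dx + dy * dy + dz * dz) < 500 * 500; // within 500 units
    infoPanel.visible = near; // a Flex Panel laid over the 3D canvas
}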

I really believe that Web Services are the future of exchanging data between applications, and in no time they are going to be very common and offer much more sophisticated data; these days they are still considered a new thing. ESRI, as far as I know, has a Web Service that returns an SWF file showing the area for specific latitude and longitude values; we could use that dynamically received file to make a 2D navigational map of the area. I can still remember from my childhood how useful a small outline map in the corner was while playing strategy games. The same can be used here as a helper in the 3D navigation system. Web Services can also provide us with information about an area, like a text about a specific building or an image, so we don't have to host the data on our server, and that is what really counts in this case. You create just the outline of the environment (polygons and models) and the rest of the data is delivered automatically by Web Services to the view (terrain, building information, etc.); the idea of distributed objects is really inspiring. I mentioned before the idea of the gaming industry merging with the GIS market; well, it will happen in the near future, I guarantee. For games like The Sims and SimCity it will happen very soon; people just like watching life on the screen too much. But the real challenge will be, for instance, true 3D car simulators where you can drive all around the globe like in reality, using GIS systems; believe me, it will happen. I think that in the end the solution I presented can really be a basis for the bunch of ideas that popped up while doing this research, and it also gave you an outline of how the whole 3D situation looks when it comes to the browser. By showing this, maybe we can push this world a bit forward. I know there are probably many more fields this solution can be used for; at the moment my thinking is just limited to what I was using it for. So, if you find this project interesting for your purposes, well, I will quote one of the Papervision3D members: "If my bullet fits your gun... shoot it!".


By Marcin Czech

Monday, May 28, 2007

models instead of primitives


The same world drawn on the Google Earth satellite layer, and the same world moved to an SWF Flash file using KML as the data. Distances are preserved. The two visible models (a torus and a cone-cube) were modeled in 3D Studio Max and placed into Google Earth.

Now, in the last post I described how to render a simple view. However, a static image is not so fascinating; it just shows that the idea works. Let's give the camera some moves. We have to implement a simple walking and observation mechanism based on keyboard and mouse. Remember the two other listeners we implemented? Here is the place where we are going to make use of them. DisplayObject3D objects like the camera already have methods like moveForward and moveBackward implemented, so what we do is create Boolean variables for all four arrow keys. We set them to true when the KEY_DOWN event is dispatched, and to false on KEY_UP. Then we create a method which we invoke every time before renderCamera is invoked, so that the camera view changes and we get the illusion of moving in space. In that method we set two values: one for moving forward and backward, the second for rotating the view (together they give FPP-style navigation). We set those values according to the Boolean flags for the keys that were pressed. For instance, if the flag for the up arrow is true, we set the move value to 10; for the down arrow we set it to -10; if neither, the value is 0. The second value we change when the left and right arrows are involved. Now, after setting the values, we have to update the camera position, which simply means invoking moveForward and adjusting the rotationZ property with the two calculated values. We use them like this:

camera.moveForward(firstValue);

camera.rotationZ += secondValue;

So, summarizing the simple navigation system, we get three methods which have to be invoked with each frame that is rendered.

· calculateCamera – sets the values according to the key flags that were set by the key handlers.

· updateCamera – applies the move and rotation updates to the camera.

· renderCamera – renders the image from the given camera.

Those three methods are of course invoked inside the onEnterFrame method, which is executed at framerate speed. To add mouse navigation we will use the Sprite properties that hold the mouseX and mouseY values. In our case they are connected with the camera's Z and X axes. We tweak the rotation and movement speed with some fixed values. The outline of the code should be more or less like this:

camera.rotationZ = 0 - pv3DSprite.mouseX * 0.6;

camera.rotationX = 90 - pv3DSprite.mouseY * 0.2;

This is placed inside the onEnterFrame method or any of the three presented above (they are invoked by onEnterFrame after all). The onEnterFrame method generates an image that is put onto the Sprite object, which is connected with our Canvas3D component lying inside the panel component in the MXML file. Compile the whole application, along with a simple KML file, into an SWF file. If you now run the file in the browser you should be able to fly like a free bird among the KML buildings. An inspiring view, isn't it?
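Put together, the navigation loop might look more or less like this; a sketch only, with the key flags (upPressed, downPressed, and so on) and the speed constants being my own hypothetical names.

private var upPressed:Boolean, downPressed:Boolean, leftPressed:Boolean, rightPressed:Boolean; // set by the key handlers
private var moveValue:Number = 0;
private var rotateValue:Number = 0;

private function calculateCamera():void {
    // translate the key flags set by the handlers into movement values
    moveValue = upPressed ? 10 : (downPressed ? -10 : 0);
    rotateValue = leftPressed ? -3 : (rightPressed ? 3 : 0);
}

private function updateCamera():void {
    camera.moveForward(moveValue);
    camera.rotationZ += rotateValue;
    camera.rotationX = 90 - pv3DSprite.mouseY * 0.2; // mouse look up and down
}

private function onEnterFrame(event:Event):void {
    calculateCamera();
    updateCamera();
    scene.renderCamera(camera);
}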

Now that we can see some primitives, the smartest move to improve the city view is to develop a way of viewing 3D models, which give much better visualization and a higher level of immersion. There are plenty of modelers available these days; some are commercial and some have free licenses. The one thing that connects them is that they have adopted the Collada file as one of the standard formats to save models to. Now, as you might remember, Google Earth in its latest version supports Collada files. And, as you might also remember from our speculations at the beginning of this document, the PV3D engine comes with two model formats it can adopt: ASE and DAE. So, where to start? First of all, let us see how Collada models are added to Google Earth.

In Google Earth press the Shift+Ctrl+M combination or go to the menu, Add, and choose Model. You will be presented with a panel to enter the longitude and latitude values for the model center. By default, the values of the center point of the current view are entered. Click browse and a file chooser will pop up for you to point to a DAE Collada model file. If you now tilt the view you will see the model you just added. One thing worth noticing when placing models in Google Earth is that they need to have dimensions counted in thousands of units (in the 3D Studio Max case) in order to have a natural size in the Google Earth environment. So let us now save it, along with the other polygons, into a KML file. The most interesting part, including the Model tag and the link to the local DAE file, is represented very simply and can be seen by opening the KML file structure.

Worth noticing is that inside the Model tag there are still some other very interesting options available, like the scale or orientation of the model, which we can use later. Once we have that structure, we go back to our Kml class and add a bit of ActionScript code. First of all we need two arrays, just like for the Mesh3D objects. Here we create a COLLADA_FILENAMES array with String objects in it (URLs to Collada files) and COLLADA_TRANSLATIONS, which holds the translation vectors from WORLD_PIVOT, just as before. Then we add new tags to continue the recursion: if a Model tag is found, it leads us to the Location tag, where we find the longitude and latitude values for the model. We again use the gnomonic projection to convert the coordinates to the local area. If the Model is the first object read from the KML file, its latitude and longitude are used as the WORLD_PIVOT. We save the translation vector and the filename into our COLLADA arrays, preserving the order of the indices. If we did everything properly, we end up with two new arrays that we can use within the PV3D engine class, just as we extracted the Mesh3D objects from MESH_ARRAY.
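The parsing step for one Model tag might look roughly like this in E4X; a sketch only, assuming model is the XML node of the Model tag, ignoring the KML default namespace, and using the Kml class helpers described in the previous post (worldPivotVertex being the Vertex3D version of WORLD_PIVOT).

// inside the recursive KML walk, when a Model tag is found
var href:String = String(model.Link.href); // path to the local DAE file
var lon:Number = Number(model.Location.longitude);
var lat:Number = Number(model.Location.latitude);
var local:Vertex3D = gnomonicProjection(lon, lat, 0, WORLD_PIVOT, scaleXY, scaleZ);
COLLADA_FILENAMES.push(href);
COLLADA_TRANSLATIONS.push(calculateTranslation(worldPivotVertex, local));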

Inside the PV3D engine class we look for the line of code where we put the loop over MESH_ARRAY. Now we have to do the same with the COLLADA arrays, with a few extensions. Let's loop through the COLLADA_FILENAMES elements and for each of them create a new Collada object (a class available from the PV3D package), using the String URL in the constructor. Then we can use the translation vector from the COLLADA_TRANSLATIONS array, just like we did for the Mesh3D objects, and finally add the Collada object to the rootNode. Sounds simple, and it is simple! But it's not going to work that way: it will run, but we are not going to see any of the models we attached to the KML file. Collada files are XML structures that need to be parsed before the PV3D engine can access the data they hold. Parsing the file takes some time; it is rather slow, so when PV3D wants to access the data, the data is not ready yet. We need to develop a loader for XML files that informs us when the whole file is loaded into the virtual structure and available for us. I did the same while loading the KML file in the beginning, so now is a good time to explain that process.

We will use the URLLoader class, which can be found in the AS class package provided with the Flex SDK. The URLLoader class downloads data from a given URL as text, binary data or XML variables. The URLLoader object captures all the data before making it available to ActionScript. During the capturing process it dispatches events that we can catch to invoke proper actions (there are also properties like bytesLoaded and bytesTotal, useful for creating progress bars, for instance). To invoke the loading process on a URLLoader object we need a URL object. The URL we use is a URLRequest object, with the String filename from COLLADA_FILENAMES used as the constructor parameter. By default, URLRequest works using the HTTP GET method. However, you can change that by setting the method property on the URLRequest object, like this:

var request:URLRequest = new URLRequest("hola.txt");

request.method = URLRequestMethod.POST;

Next we pass it to the URLLoader's load method as a parameter. Then we add a listener to our URLLoader that listens for the Event.COMPLETE event and invokes the proper method, onModelComplete in our case. The onModelComplete method is responsible for several things, but no magic here: we just put in the code for creating the Collada objects from the array, then the code for moving them along the axes and adding them as children of the rootNode. We can also add some extra code, like rotating a model before placing it on the stage. The full structure of that loader can be found in the walk3D.as source file (a more detailed description of the process can be found in the Adobe Flex Builder Help). Now just check if it works. If so, you should be able to see the models inside the city. The important thing here is the scale; as I mentioned before, it depends on the modeler you are using. A model should be much bigger (thousands of units) to be placed inside Google Earth, and then its size has to be reduced to use it inside the PV3D engine. In the KML class there is a COLLADA_SIZE parameter which you can use in the Collada object constructor (the second parameter is the scale) from the PV3D bundle. By default this parameter is set to 0.03 so that models fit the scene. Again, you can tweak it if you are using a modeler other than 3D Studio Max, so that the models fit the 3D world.
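An outline of that loader, as I understand it from walk3D.as; a sketch only, under the assumption that the Collada constructor accepts the loaded XML plus a scale value, and with the index bookkeeping for multiple models omitted for brevity.

var loader:URLLoader = new URLLoader();
loader.addEventListener(Event.COMPLETE, onModelComplete);
loader.load(new URLRequest(kml.COLLADA_FILENAMES[0]));

private function onModelComplete(event:Event):void {
    var modelXML:XML = XML(URLLoader(event.target).data);
    var model:Collada = new Collada(modelXML, kml.COLLADA_SIZE); // second parameter is the scale
    var distances:Vertex3D = kml.COLLADA_TRANSLATIONS[0];
    model.translate(distances.x, new Number3D(1, 0, 0));
    model.translate(distances.y, new Number3D(0, 1, 0));
    rootNode.addChild(model);
}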



By Marcin Czech

let's have a 3D walk

We need to extend our knowledge of Flex to create a basis for the whole 3D application. However, the MXML files should only contain the presentation layer, not the logic itself, so we move all the logic to a separate AS class and just invoke it from the main MXML file. Of course we start with the Application tag and a Panel inside it to actually be able to view the data. But what we need to do next is create a new graphical component to use for PV3D (remember that PV3D was originally developed for Flash, not for Flex). Designing your own components also gives great benefits which we will use. Probably the main reason for doing so is to make the component, and all the logic it performs, an independent part (the PV3D visualization in this case). That way we have a reusable component that we can use several times and easily port to other projects, for instance. In Adobe Flex 2.0 we can create components in several ways, depending on what we want to achieve. Before creating a new component, check all the existing ones so that you don't duplicate the work. If you find something usable, you can extend it (for example a customized Button, Tree or DataGrid component). We will, however, focus only on developing components in ActionScript, not in MXML, which is also an option but gives much less customizability. So practically there are three ways to create your component:


· Compose several components into one more sophisticated component.

· Extend an existing component, giving it some additional functionality.

· Create a whole new component by extending the fundamental UIComponent class.


Of course, as you can expect, the third option of creating a new component is the most beneficial for us, because we can work with the code at its basis, and that is the way we are going to design the new UIComponent for PV3D.

If you look at the component structure you will see that all the visual components are subclasses of the UIComponent class. If you extend that class to create a new one, you get all the methods and properties the class was holding. The same happens here. The minimum you have to provide is a class constructor. Overriding the rest of the methods depends on what you want to get. There are some fundamental methods that a component has. You don't invoke these methods manually; they are invoked automatically in the component life cycle. Here are some of them (you can of course override one or a few of them if you want; we will do that later).


· layoutChrome() – if your component serves as a container class, this is where you can define a border area around the container. The method is invoked automatically when a call to the invalidateDisplayList() method occurs.

· commitProperties() – commits the changes to component properties. Invoked via the invalidateProperties() method.

· measure() – sets the default and minimum size of the component. Invoked via the invalidateSize() method.

· createChildren() – if the component has any children, this is where they are created. For instance, a ComboBox control contains a TextInput, which would be created here. Invoked after the addChild() method is used.

· updateDisplayList() – draws the component and the structure of its children on the screen using the component properties. The parent container of the component determines the size of the component itself. Nothing is displayed until this method is invoked. invalidateDisplayList() is the method which triggers it: updateDisplayList() is called on the next render event after invalidateDisplayList() has been called.


As I mentioned before, the invocation of those methods is based on the life cycle, which describes the order of the steps taken to create a UIComponent. All of this life cycle happens behind the stage, but it is good for us to have an outline of how it works: when the methods are called, what events are dispatched, and when the component becomes visible. Let's consider a fairly simple example:


var container:Box = new Box();

var button:Button = new Button();

button.label = "Aloha World!";

container.addChild(button);


In the first line the component constructor is invoked. Then some of its properties are set; nothing special at all. However, the property setter could already call the invalidateProperties(), invalidateSize(), or invalidateDisplayList() methods. In the last line the component is added to its parent. After that, Flex runs the following flow:


· Sets the parent property of component.

· Computes the style settings.

· Dispatches the preinitialize event on the component.

· Calls the createChildren() method.

· Calls the invalidation methods (properties, size, display list) so that they trigger calls to the methods described earlier.

· Dispatches the initialize event on the component. The component is not laid out yet.

· Dispatches the childAdd event on the container.

· Dispatches the initialize event on the parent container.


The next render event brings the following actions:


· Call the commitProperties()

· Call the measure()

· Call layoutChrome()

· Call updateDisplayList()

· Dispatch updateComplete event on component.


When the last render event occurs, Flex performs these actions:


· Sets the visible property to true, making the component actually visible.

· Dispatches the creationComplete event on the component (only once). The component is sized and processed for layout.

· Dispatches the additional updateComplete event which is also dispatched whenever position, layout, size or other visual properties are changed.


I know it might look a bit complex, so let me simplify this flow into just the few steps that should actually interest us and are worth remembering. The events indicate when the component is created, laid out and drawn. Put simply, the sequence is:

preinitialize → initialize → creationComplete → updateComplete


Each item in this sequence is an event dispatched during the component creation life cycle. When it comes to a component that is a container, meaning it has other components inside it, the whole process is extended a bit (it includes the creation cycle for each child the container has). There is also an event dispatched at the very far end, after the Application container tag has created everything: the Application object dispatches the applicationComplete event, which is the last event during startup.

So, after digesting all that knowledge, let's first write a new UIComponent that will be a kind of canvas for our KML object. How do we do that? We start with an ActionScript file and a class clause that extends the UIComponent class. Then we need the body, which in our case will be a Sprite object and a Rectangle object. The Sprite object, as I mentioned in the technology chapter, is the substitute for MovieClip in the new version of AS. The Sprite object is considered a basic display-list building block. It can be used as a DisplayObject or as a DisplayObjectContainer and possess children. As you might remember, we also use this object because we want to create a Scene3D object from the PV3D engine to keep all the objects inside, and the Scene3D object takes a Sprite object in its constructor. So this sprite will be exactly the display area for the three-dimensional world. The other object, the Rectangle, we use to build a bounding box for the view. We create the rectangle depending on the specific container height and width values, so that we can organize a better layout structure for the application. The rectangle is then set as the scrollRect of IUIComponent, which is an interface of UIComponent, and it automatically creates a bounding box for us. Once we have those two variables declared, we create a constructor (don't forget to invoke the super method first, inside the constructor).

In our case we need to override just two of the functions from the UIComponent package. The first one is the createChildren() method. As advised by Adobe, we have to check at the beginning whether the children were already created; if not, we create them, set their properties and finally use the addChild() method. The reason for checking whether the children were created is future-proofing (further extensions of the class). In our case we just create a new Sprite object and add it to the component. The second one is of course updateDisplayList(), which is responsible for actually drawing the whole structure (a more detailed description of both methods can be found above).

We also set the Rectangle object's boundaries with the parameters given to the method. These parameters are set in the MXML file while creating the whole application (the width and height attributes of the tag). After that we can set the rectangle as the scrollRect object from the UIComponent interface. We do that by simply adding the line:


scrollRect = clipRect
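Put together, the whole component might look more or less like this; a sketch only, where the names Canvas3D, pv3DSprite and clipRect are the ones used in this project, but the package name is hypothetical.

package com.example.components { // hypothetical package name
    import flash.display.Sprite;
    import flash.geom.Rectangle;
    import mx.core.UIComponent;

    public class Canvas3D extends UIComponent {

        public var pv3DSprite:Sprite; // display surface handed later to the PV3D Scene3D
        private var clipRect:Rectangle;

        public function Canvas3D() {
            super(); // invoke the UIComponent constructor first
        }

        override protected function createChildren():void {
            super.createChildren();
            if (!pv3DSprite) { // create the children only once, as Adobe advises
                pv3DSprite = new Sprite();
                addChild(pv3DSprite);
            }
        }

        override protected function updateDisplayList(unscaledWidth:Number, unscaledHeight:Number):void {
            super.updateDisplayList(unscaledWidth, unscaledHeight);
            clipRect = new Rectangle(0, 0, unscaledWidth, unscaledHeight);
            scrollRect = clipRect; // the bounding box that clips the 3D view
        }
    }
}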


We have created the UIComponent that we can use as a canvas and pass to the PV3D engine. The next thing is the main MXML file that will hold the Application tag and all the other components, including the one we just designed. So we start with the Application tag and put in the normal MXML namespace as an attribute. We then add our own namespace, where we hold our component class. Then we put in a Panel component to keep the layout (inside the Application tag, of course), and as its child we add our component, which is the Canvas3D class.


Once we have the outline, we are ready to dig into the details. First of all we need id attributes for the Panel and the Canvas3D object, so we name them "mainPanel" and "mainCanvas" respectively. What exactly we want to do, after the MXML file is laid out, is pass the Canvas3D object to the PV3D engine. Let us consider the PV3D engine as an object for now (an AS class). How do we do that? We use the applicationComplete event, which is dispatched after the whole MXML file is laid out (the last event that occurs), to invoke a PV3D engine object with our canvas as a parameter. To catch the event we put a new attribute into the Application tag, "applicationComplete", which points at the method we want to invoke after the event is dispatched. Just after the opening Application tag we put an ActionScript section containing the body of that method. The AS section has to be wrapped in specific tags, which I will show you in a second (so that the compiler does not confuse it with MXML code). The body of the function creates a new Kml object from a given URL (later we might use a text box to pass the URL) and a new PV3D engine object, to whose constructor we pass our canvas object and the created Kml object.
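The main MXML file might then look more or less like this; a sketch, where the engine class name (PV3DEngine), the startEngine method and the KML file name are my own placeholders.

<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
                xmlns:comp="com.example.components.*"
                applicationComplete="startEngine()">
    <mx:Script>
        <![CDATA[
            private function startEngine():void {
                // create the data object and hand it, with the canvas, to the engine
                var kml:Kml = new Kml("world.kml"); // hypothetical file name
                var engine:PV3DEngine = new PV3DEngine(mainCanvas, kml);
            }
        ]]>
    </mx:Script>
    <mx:Panel id="mainPanel" title="3D walk">
        <comp:Canvas3D id="mainCanvas" width="640" height="480"/>
    </mx:Panel>
</mx:Application>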


In that function we passed the canvas and the Kml object to the PV3D object, which takes over the drawing logic of the whole world from those two ingredients. So now what we need is that PV3D engine class, which binds the given KML data to the PV3D engine and draws things on the given canvas, visible later in the SWF file.

Let's create the new class. In the constructor we save the references to the canvas and the Kml object that were passed in. We also initialize the stage properties. To get hold of the stage object (the main container of the whole application, like the stage in old Flash) we can use the static method of the Application class, or just the stage property of the canvas we were given. A Flash application has only one stage object; every DisplayObject (an object that can be put into the display list) has a stage property which refers to that same stage object. We can set the world graphics quality on the stage object using its quality property and the static values LOW, MEDIUM and HIGH from the StageQuality class. The next thing to remember is that we will need stage listeners to handle events like keyboard hits. So for now we set up the three listeners listed below; I explain in detail how they work later.


· onEnterFrame - the method responsible for updating the view. The event this listener catches is the ENTER_FRAME event, which is dispatched at the speed we set as the framerate for the whole application (in other words, it can be considered the frame render).

· keyDownHandler - catches the KEY_DOWN event. Used for navigation with the keyboard.

· keyUpHandler - catches the KEY_UP event. Same as above.


We add the listeners to the stage object in the following manner:


stageObject.addEventListener(eventName, handlerMethod);
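Concretely, with the three handlers named above, the wiring might be (a sketch; the quality line just shows the stage property mentioned earlier):

stageObject.quality = StageQuality.MEDIUM; // world graphics quality
stageObject.addEventListener(Event.ENTER_FRAME, onEnterFrame);
stageObject.addEventListener(KeyboardEvent.KEY_DOWN, keyDownHandler);
stageObject.addEventListener(KeyboardEvent.KEY_UP, keyUpHandler);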


We are now ready to set the stage with some objects. First of all we create the Scene3D object, which will be the main container of the world. As mentioned before, we use the Sprite object from our Canvas3D component; that Sprite was passed to the constructor of the class we are developing at the moment. We use it as the one and only parameter of the Scene3D constructor. After that we have a DisplayObjectContainer3D (the Scene3D class) based on a given DisplayObjectContainer (the Sprite class) from the Flex application.

Then we create a camera object. We use the FreeCamera3D object from the PV3D engine. The camera takes several parameters; we will focus on just two of them. This is quite an important issue, because it affects the way the objects are visible, the clipping planes, and so on. We set the first parameter, the zoom value, to 6. The second one is the front clipping plane, in other words how close we can get to an object before it vanishes; we set its value to 100. With values like that we need to place the camera really far away, 5000 units in this case. The values I give here are the result of my experiments and give considerably good viewing results. Feel free to tweak them any way you want.
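In code that could be (a sketch, assuming the FreeCamera3D constructor takes the zoom and the front clipping plane in that order, as described above):

var camera:FreeCamera3D = new FreeCamera3D(6, 100); // zoom, front clipping plane
camera.z = -5000; // place the camera far away from the world pivot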

We will also need a main root object which will hold all the other objects, so that we can, for instance, rotate the whole world around its pivot. We add the rootNode, a DisplayObject3D object from PV3D, to our Scene3D object, which is a DisplayObjectContainer3D. If you look at the inheritance tree of AS3 you will find a very similar dependency between DisplayObject and DisplayObjectContainer (pure Flex) and DisplayObject3D and DisplayObjectContainer3D (the PV3D engine). Understanding the basic class structure for setting up a 3D world in Flex is quite important, and easy, so let me quickly review the dependencies again. Creating a 3D world in Flex means setting up an object based on a DisplayObjectContainer (a Sprite, for instance), then creating a DisplayObjectContainer3D based on that object (viewing 3D is, after all, a sequence of 2D images). After that we add DisplayObject3D objects to our DisplayObjectContainer3D when they are connected with the 3D graphics, and additional DisplayObject objects to the DisplayObjectContainer when they are panels, text inputs or buttons. That way we can project quite a nice layout with a component for viewing 3D graphics inside it.

Now we add the rootNode (which is an empty object), just like any other child, to the Scene3D using the addChild method. After that, every new child we add, we add to the rootNode. With the constructor we were also given a kml object, which was filled with data from a file specified (statically, for now) in the main MXML file. So of course we have access to all the data available within the kml object. As we are ready to add any data suitable for the PV3D engine, we will use the Mesh3D objects available by simply accessing the MESH_ARRAY array of the kml object. So how do we do that? We project a simple loop that goes through all the elements of MESH_ARRAY; for each element we get the reference to the Mesh3D object, then translate it with the translation vector calculated earlier by the Kml class and available in the MESH_TRANSLATION array. We translate meshes using their translate method, along each axis by the given distance (we don't need to translate along the z axis, the (0,0,1) case). The algorithm should be something like this:


for (var i:int = 0; i < kml.MESH_ARRAY.length; i++) {
    var ourMesh:Mesh3D = kml.MESH_ARRAY[i];
    var distances:Vertex3D = kml.MESH_TRANSLATION[i];
    ourMesh.translate(distances.x, new Number3D(1, 0, 0));
    ourMesh.translate(distances.y, new Number3D(0, 1, 0));
    rootNode.addChild(ourMesh);
}


After that, we have added all the meshes created within the kml object to the rootNode object. The next step is finally rendering the screen and actually seeing something. To do that we have to execute the renderCamera method of the Scene3D object. It takes a camera object as a parameter, which we have already created and added. This method, however, has to be invoked not in the initialization process but inside the method responsible for updating the view, which in our case is connected with the ENTER_FRAME event and the onEnterFrame method, so that renderCamera is invoked at the framerate we've set. So let there be light: we can see some objects! In the next step we will set up a bird's-eye camera and get 3D models inside...



By Marcin Czech

Tuesday, May 15, 2007

converting KML to 3D, part 2

As you remember, we access the longitude and latitude values using a recursive loop. The third value in the spliced string found in the coordinates tag is the altitude, which is counted in meters and doesn't need to be calculated in any specific way. For an extruded polygon we find latitude, longitude and altitude in the KML file. The thing worth noticing here is that the coordinates tag doesn't contain the base vertices of the polygon (longitude, latitude, 0), so we have to make a copy of the vertices so that the polygons stick to the ground somehow. For storing the polygons we use a dynamic array which stores arrays of vertices; that way we have easy access to every single vertex (which can be useful later, for UV mapping for instance). OK, to revise: at the moment we have a POLYGONS array whose number of elements is equal to the number of coordinates tags found in the KML file. Each POLYGONS element is an array of the vertices that build up the polygon found.
As you might notice, we do all the measurements from a local pivot (the first vertex of a polygon is its local pivot). If we find another coordinates tag, we do all the calculations again from the (0,0) point. So if we put a dozen polygons into the KML, we end up with all the polygons placed on top of one another. To avoid that, we need to set a WORLD_PIVOT and store each polygon's translation vector from it. The WORLD_PIVOT is the first local pivot ever found (which is the first vertex ever found). The translations are stored in the MESH_TRANSLATION array, and their indices correspond to the ones in the POLYGONS array. The calculation of these translation vectors is easy: we store the latitude, longitude pair of the WORLD_PIVOT, and when the algorithm finds another polygon (coordinates tag), it subtracts the polygon's local pivot (its first vertex) from the WORLD_PIVOT and stores the result in the MESH_TRANSLATION array. That way we store the data in a much more customizable manner (a model and its world position are independent objects).
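In code that calculation is just a subtraction; a sketch following the calculateTranslation signature listed at the end of this post.

function calculateTranslation(world:Vertex3D, local:Vertex3D):Vertex3D {
    // distance from this polygon's local pivot to the world pivot
    return new Vertex3D(world.x - local.x, world.y - local.y, 0);
}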
The next thing is giving the polygons volume (they are flat at the moment). We search the POLYGONS arrays and check whether the vertices have a z value greater than 0. If so, it means they are extruded. If not, we leave them as they are (they can be used for streets, for instance). The whole process for extruded polygons is to copy the vertices with the z value changed to 0 and add them to that POLYGONS item. In other words, an extruded polygon's vertex array length should be doubled and filled with modified copies of the existing vertices. Now that we have done that, are we ready to create faces for the 3D meshes? Not yet: we have to scale all the vertices so that they are more usable later. The default WORLD SCALE for x,y is 100,000,000 and it is different from the WORLD SCALE for z, which is 10. This is of course because they are calculated differently: x,y are calculated from latitude and longitude, while z is just the altitude parameter set in Google Earth. We scale by multiplying all vertex values by the WORLD SCALE parameter that can be found inside the KML class.
A 3D face is a triangle made of three vertices. Faces make up a mesh structure (a triangulated 3D model) and actually make it visible on screen. By default only one side of a face is visible, which gives better performance. If you look at the Papervision3D structure you will see that to create a mesh we need two things. The first is the package of vertices used in that 3D model, in other words an array of vertices. The second is the array of faces for that mesh. We already have the first ingredient: the items of the POLYGONS array. The second one needs to be developed. So we have to project a loop that goes through all the polygons found in the POLYGONS array, creates a mesh object for each of them, and stores them in the MESH array.
To obtain the faces array from the vertices array we need to loop again, through all the vertices this time, group them in threes, push each new face on the fly to a temporary faces array, and then use both arrays (vertices and faces) to create the new mesh. The problem here is that we cannot take the vertices just like that to create faces: this is where polygon triangulation algorithms, which involve advanced math, come in. Let's quickly revise our situation. First of all, we don't need to implement advanced 3D triangulation algorithms like the Delaunay triangulation, because we are dealing with 2D polygons, or objects which are always extruded polygons. A general polygon triangulation algorithm would be really nice here, but let us limit ourselves for the moment to convex polygons, and I'll tell you why. Primarily, we can write a simple triangulation algorithm for them without involving advanced calculations. Secondly, keep in mind that PV3D supports Collada models, which we would like to use later for the more complex models anyway, so I suggest spending the coding time somewhere else and writing a triangulation for convex polygons only. The native Collada class in PV3D, for instance, doesn't need a triangulation algorithm, because the models are already triangulated when exported from a 3D modeling application. The vertices that we have in a POLYGONS item are sorted in a polyline (the same sequence we drew in Google Earth). Taking that fact into consideration, the algorithm goes as follows:

1. Set index to 1.
2. Take the first vertex of the polygon and mark it as main (vertices[0]).
3. Take the vertices "vertices[index]" and "vertices[index + 1]" from the array.
4. Create a 3D face object from these three vertices.
5. Push the face onto the temporary array.
6. If vertices[index + 1] is equal to the main vertex, break the loop.
7. If not, increment index and go to step 3.
8. Create a Mesh3D object using the vertices array and the temporary faces array.
9. Store the mesh in the MESH array.
10. Take another polygon (vertices array), if found, and go to step 1.
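Here is a sketch of that fan triangulation in code, assuming a Face3D constructor that takes the three vertices as an array (check the Face3D constructor in your PV3D release) and that the polygon ring does not repeat its first vertex at the end (if it does, drop the duplicate first).

function triangulatePolygonVertices(vertices:Array):Array {
    var faces:Array = [];
    var main:Vertex3D = vertices[0]; // fan around the first vertex
    for (var i:int = 1; i < vertices.length - 1; i++) {
        // clockwise order so the faces stay visible with back-face culling
        faces.push(new Face3D([main, vertices[i], vertices[i + 1]]));
    }
    return faces;
}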

That way we have triangulated the roof of the cubic model into a Mesh3D model if the parsed polygon has some height, or into a simple flat polygon if not. One more thing we have to do here is create the faces for the side walls of our models, in a simple loop using the copied vertices. Another thing to remember is to fetch the vertices in clockwise order while creating faces, in order to be able to see them later (if you still can't see them, try fetching them in the other direction). After all these steps we should end up with the MESH array and the MESH_TRANSLATION array, both available from within the KML class and ready to be connected with the Flex application and Papervision3D. Here are the most important parts of the KML class body (more detailed information can be found in the KML class documentation provided).
Properties:
• KML_FILE : XML - *.kml files are nothing more than specific XML files. This property holds the file's structure inside the KML class so that we can process it using E4X.
• POLYGONS : Array – each of its elements is another array consisting of Vertex3D objects, parsed from the KML_FILE structure and its coordinates tags.
• MESH_TRANSLATION : Array - preserves the translation of each mesh, calculated using the local pivot and the world pivot. Its order (array index) matches the index in the POLYGONS array.
• WORLD_PIVOT : Number3D – the first vertex ever parsed is considered the (0,0,0) point of the created world. All the mesh translation values are calculated from that main point. Its values are the latitude and longitude of the first vertex, and z = 0. There is also a Vertex3D version available.
• MESH_ARRAY : Array – the final array of Mesh3D objects, created from the POLYGONS array and the temporary faces arrays.

Methods:

Kml(filename:String) – class constructor. Sets the KML_FILE object and starts the whole process of world creation.

calculateTranslation(world:Vertex3D, local:Vertex3D):Vertex3D – calculates the distance between the given local pivot and the world pivot. Returns a Vertex3D object as the result. The results are stored in the MESH_TRANSLATION array.

gnomonicProjection(lambda:Number, fi:Number, z:Number, localPivot:Number3D, scaleXY:Number, scaleZ:Number):Vertex3D – converts the given longitude (lambda) and latitude (fi) into Cartesian x,y coordinates using the algorithm and equations described earlier. Additionally it scales the results automatically (pass "1" for the original values). Returns a Vertex3D object that can be used directly in 3D applications.

triangulatePolygonVertices(vertices:Array):Array – triangulates the given array of vertices using the fan algorithm described above. Returns a faces array based on those vertices, so that they can be used to create a mesh. Used in the createMeshes method.

createMeshes():void – takes the POLYGONS array and, using the triangulation method above, fills the MESH array with new objects ready to use in PV3D.

Now that we have a class to represent the data, we can actually start creating the application which will be the home for our KML pet. Let's go then.