Thursday, 16 December 2010

Finally managed to fish out the formula that the library uses to calculate the perspective view. In the end, it's the same formula that we used previously, so at least my problem is narrowed down: the way we create the perspective view now and the way we did it previously probably makes no difference.
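For reference, the formula in question is the standard frustum projection matrix, i.e. what glFrustumf builds. A minimal sketch of it in C, filling a column-major array the way OpenGL expects (the function name is mine):

    #include <string.h>

    /* Builds the standard OpenGL frustum projection matrix, i.e. what
       glFrustumf(l, r, b, t, n, f) multiplies onto the current matrix.
       m is a 16-element array in column-major order. */
    static void buildFrustum(float m[16],
                             float l, float r, float b, float t,
                             float n, float f)
    {
        memset(m, 0, 16 * sizeof(float));
        m[0]  =  2.0f * n / (r - l);        /* x scale */
        m[5]  =  2.0f * n / (t - b);        /* y scale */
        m[8]  =  (r + l) / (r - l);         /* x offset */
        m[9]  =  (t + b) / (t - b);         /* y offset */
        m[10] = -(f + n) / (f - n);         /* z remap */
        m[11] = -1.0f;                      /* w = -z, the perspective divide */
        m[14] = -2.0f * f * n / (f - n);    /* z translate */
    }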

Looks like it's the camera we set up that is causing the problem. Setting up the camera is necessary so that we can see the loaded object without doing an extra translation along the z-axis; even when the extra translation is done, the object is still not displayed correctly. Furthermore, without the camera set up, none of the objects' animations are displayed.
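To make the trade-off concrete, this is roughly what the camera setup amounts to on the modelview stack (a simplified sketch; the function and the camera distance are mine, not the library's):

    #include <OpenGLES/ES1/gl.h>

    /* Sketch: with no camera transform the eye sits at the origin, so a
       freshly loaded object at the origin is behind the near plane and
       invisible unless something pushes it down the -z axis. */
    static void setUpCamera(float cameraDistance)
    {
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        /* Equivalent to placing the eye at (0, 0, cameraDistance),
           looking down the negative z-axis. */
        glTranslatef(0.0f, 0.0f, -cameraDistance);
    }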

Zac and I had a small discussion at around 2 p.m. We decided to pause the explode and the pan-to-translate gestures first, since both of us felt that these two features would definitely take quite some time. Hence, we moved on to work on the plane-cutting functionality and the reading of the DICOM location information.

I first started out looking for conversion tools that could do a DICOM to PNG conversion. Found quite a few, though they only worked on Windows. I tried out several, but none of them worked properly: some did not even do a conversion, and the ones that did gave me PNG images that looked totally different from the original DICOM images.

Wednesday, 15 December 2010

Decided to figure out the formula that the library uses to create the perspective view today. Spent the day studying the methods in the library and how they work, and did some searching on the web to check the various ways people create perspective views. Though I did not manage to make any breakthroughs on the pan-to-translate gesture today, I resolved some issues in the application along the way. They had to do with the color-picking method that we used: we had missed out some parts of the drawing, which caused the display to go haywire at times.
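For the record, the color-picking approach works roughly like this; a simplified sketch, not our exact code (drawComponentFlat is a stand-in for our real draw call):

    #include <OpenGLES/ES1/gl.h>

    /* Simplified picking pass: draw each component in a unique flat
       color, read back the pixel under the touch, and map it to an ID. */
    static int pickComponentAt(int x, int y, int viewportHeight, int count)
    {
        glDisable(GL_LIGHTING);            /* colors must arrive unmodified */
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        for (int i = 0; i < count; i++) {
            glColor4ub(i + 1, 0, 0, 255);  /* encode the ID in the red channel */
            /* drawComponentFlat(i);  -- every part must be drawn here;
               missing out any of the drawing corrupts the picking result,
               which is the kind of bug we hit. */
        }
        GLubyte pixel[4];
        /* OpenGL's origin is bottom-left, UIKit's is top-left, hence the flip. */
        glReadPixels(x, viewportHeight - y, 1, 1,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixel);
        glEnable(GL_LIGHTING);
        return (int)pixel[0] - 1;          /* -1 means background */
    }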

Along the way, Zac and I also had several discussions. We talked about how to improve the rotation, both for the object as a whole and for the individual components. We found the current swipe-to-rotate too static, in that every swipe only rotates the object by 45 degrees. Hence, we decided to make use of the pan gesture to do the rotation instead. Once we had the idea laid out nicely, Zac took over this part while I continued to work on the pan-to-translate gesture.
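Roughly, the idea looks like this (a sketch only; the handler name, the rotation properties, and the sensitivity factor are illustrative, not Zac's actual code):

    // Continuous rotation driven by a pan gesture, instead of a fixed
    // 45-degree step per swipe.
    - (void)handleRotatePan:(UIPanGestureRecognizer *)pan
    {
        CGPoint delta = [pan translationInView:self.view];
        // Horizontal pan spins about y, vertical pan about x;
        // 0.5 degrees per point is an arbitrary sensitivity to tune.
        self.rotationY += delta.x * 0.5f;
        self.rotationX += delta.y * 0.5f;
        // Reset so the next callback reports an incremental delta.
        [pan setTranslation:CGPointZero inView:self.view];
    }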

Tuesday, 14 December 2010

Spent the whole day working on the pan-to-translate gesture. I thought it was going to be fairly simple at first, since we had already gotten the algorithm working in our previous application, where we worked with .obj files.

That obviously was not the case. Somehow, the same algorithm does not translate the object in the current application properly. From the looks of it, the final translation value is not large enough to produce the same effect that we see in our previous application.
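In general, pan-to-translate boils down to unprojecting the screen-space pan into world units at the object's depth. A sketch of the idea, with my own variable names:

    #include <math.h>

    /* Convert a pan of (dx, dy) screen points into a world-space
       translation at `distance` from the eye, given the vertical field
       of view and the viewport size. */
    static void panToWorldTranslation(float dx, float dy,
                                      float distance, float fovyRadians,
                                      float viewportW, float viewportH,
                                      float *outX, float *outY)
    {
        /* Size of the visible world at that depth. */
        float worldH = 2.0f * distance * tanf(fovyRadians / 2.0f);
        float worldW = worldH * (viewportW / viewportH);
        *outX =  dx * (worldW / viewportW);
        *outY = -dy * (worldH / viewportH);   /* screen y grows downwards */
    }

If the camera distance or the frustum differs between the two applications, the same pan would produce a different world displacement, which would fit the symptom.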

The only difference I see between the two applications is that in the current one, the perspective projection matrix is calculated by the C++ library and we have the camera set up, whereas in the previous one we made use of glFrustumf to create our perspective view.
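In the previous application, the perspective setup was just this (the parameter values here are placeholders, not the ones we actually used):

    #include <OpenGLES/ES1/gl.h>

    static void setUpPerspective(void)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glFrustumf(-1.0f, 1.0f,        /* left, right */
                   -1.5f, 1.5f,        /* bottom, top */
                    1.0f, 100.0f);     /* near, far   */
    }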

I spent the whole day trying to work things out, but to no avail. It seems that the formula that we used does not work in our current application at all. Got to spend some more time trying to understand how this whole thing works.

Monday, 13 December 2010

Started the day by implementing the ModelTransformation class and tweaking our application to work with it.

At first, both of us expected a lot of work to get the ModelTransformation class working. However, we got it up fairly quickly, before lunch, including the tweaking of the swipe-to-rotate gesture. I think the fact that we were able to make such changes in so short a time really reflects our understanding of our application.
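For a rough idea of the class, this is the shape of it (a sketch only; our actual interface differs):

    #import <Foundation/Foundation.h>
    #import <OpenGLES/ES1/gl.h>

    // Sketch: one object that owns a model's rotation, scale and
    // translation, applied to the modelview stack before drawing.
    @interface ModelTransformation : NSObject
    {
        float rotationX, rotationY, rotationZ;   // degrees
        float scale;
        float translationX, translationY, translationZ;
    }
    - (void)apply;
    @end

    @implementation ModelTransformation
    - (id)init
    {
        if ((self = [super init]))
            scale = 1.0f;                        // identity by default
        return self;
    }
    - (void)apply
    {
        glTranslatef(translationX, translationY, translationZ);
        glRotatef(rotationX, 1.0f, 0.0f, 0.0f);
        glRotatef(rotationY, 0.0f, 1.0f, 0.0f);
        glRotatef(rotationZ, 0.0f, 0.0f, 1.0f);
        glScalef(scale, scale, scale);
    }
    @end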

After lunch, we decided to split the work between ourselves. I worked on the double-tap-to-scale gesture while Zac researched how to go about implementing the explode functionality. I managed to complete my part by the end of the day. UIGestureRecognizer is really useful for our application; I managed to get the same effect as double-tapping a photo in the iPhone's Photos application.
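The wiring is small; a sketch, where everything except the UIGestureRecognizer API itself (the handler name, the modelScale property, the 2x factor) is illustrative:

    // In the view controller that owns the GL view.
    - (void)viewDidLoad
    {
        [super viewDidLoad];
        UITapGestureRecognizer *doubleTap =
            [[UITapGestureRecognizer alloc] initWithTarget:self
                                                    action:@selector(handleDoubleTap:)];
        doubleTap.numberOfTapsRequired = 2;
        [self.view addGestureRecognizer:doubleTap];
        [doubleTap release];   // pre-ARC, as this was in 2010
    }

    - (void)handleDoubleTap:(UITapGestureRecognizer *)tap
    {
        // Toggle between the default and a zoomed-in scale, the same
        // feel as double-tapping a photo in the Photos app.
        self.modelScale = (self.modelScale == 1.0f) ? 2.0f : 1.0f;
    }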