Thursday, December 30, 2010

30 December 2010

Finally, the pan gesture for translation is completed! The translation for all objects works very well now.

With regards to the problem that I mentioned yesterday, I found the cause of it. It has got to do with the objects' scale in the scene! Objects with a smaller scale translate less while objects with a larger scale translate more, even with the same translation value. I came to realize that when I went back to take a look at the scenes that I had created. The reason the cylinders translated so much was that at the point of importing, the cylinders were very small in comparison to the heart model, and I remembered that I had scaled them up a lot.

Therefore, we would have to factor in the objects' scene scale as well. Worked that out fairly quickly, and we are done! For translation, these are the things we have taken into consideration in the algorithm (a rough sketch follows the list):
- Camera's Z position
- Object's Z position that is set by us when drawing (glTranslate)
- Object's default scene Z position
- Object's default scene scale
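
To make this concrete, here is a minimal sketch of how those four factors could combine into a screen-to-world translation. The method and parameter names (worldTranslationForPanDelta:, cameraZ, nodeSceneZ, drawTranslateZ, nodeSceneScale, viewHeight) are hypothetical stand-ins, not our actual code:

// Hypothetical sketch: convert a pan delta (in surface points) into a
// world-space translation for one node. The further the node is from the
// camera, the larger the world-space move for the same finger movement,
// and the node's own scene scale has to be divided back out.
- (CGPoint)worldTranslationForPanDelta:(CGPoint)delta
                               cameraZ:(float)cameraZ
                            nodeSceneZ:(float)nodeSceneZ
                        drawTranslateZ:(float)drawTranslateZ
                        nodeSceneScale:(float)nodeSceneScale
                            viewHeight:(float)viewHeight
{
    // Effective distance between the camera and the node along the z-axis,
    // taking both the node's default scene position and our own glTranslate
    // offset into account.
    float distance = fabsf(cameraZ - (nodeSceneZ + drawTranslateZ));

    // Scale screen points by the distance, then cancel out the node's scene
    // scale so that small and large objects move by the same visual amount.
    float factor = distance / (viewHeight * nodeSceneScale);

    return CGPointMake(delta.x * factor, delta.y * factor);
}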

Some improvements have been made to how we pick the object for translation as well. Previously, every time the state of the pan gesture changed, we would check which object the user was touching. Now, when the pan gesture begins, we store the index of the object that the user has selected and reuse it for the rest of the gesture. This way, we have prevented some issues with collisions.
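
Roughly, the gesture handler now looks something like the sketch below. The picking and translation helpers (pickObjectAtPoint:, translateNodeAtIndex:byAmount:) and the selectedNodeIndex property are hypothetical names standing in for our own code:

- (void)handlePan:(UIPanGestureRecognizer *)recognizer
{
    CGPoint point = [recognizer locationInView:self.view];

    if (recognizer.state == UIGestureRecognizerStateBegan) {
        // Pick once, when the gesture starts, and remember which node it hit
        // (assume NSNotFound means nothing was hit).
        self.selectedNodeIndex = [self pickObjectAtPoint:point];
    } else if (recognizer.state == UIGestureRecognizerStateChanged &&
               self.selectedNodeIndex != NSNotFound) {
        // Reuse the stored index instead of re-picking on every change, so the
        // drag keeps moving the same node even if the finger slides over others.
        [self translateNodeAtIndex:self.selectedNodeIndex
                          byAmount:[recognizer translationInView:self.view]];
        [recognizer setTranslation:CGPointZero inView:self.view];
    }
}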

It is time to start work on labeling next week~

Wednesday, December 29, 2010

29 December 2010

Managed to get quite a few problems solved for the translation today.

I got the camera position factored into our translation algorithm, and scenes that only have objects drawn at the near plane can now be translated properly. However, if there are multiple objects in the scene at different z positions, the translation does not look right.

What needed to be done was to factor in the translation vector of each node for its individual translation, in particular the z position. This can easily be obtained through a method in the library that we are using.

Hence, in order for the translation to work, there are 3 things that we have to take into consideration:
- Camera's z position
- Node's default scene z position
- Node's z position that is being set by us when drawing

Once that was done, I then moved on to look into another issue with the translation.

We have a scene with a heart model and 4 identical cylinders. Somehow, the translation for the heart works very nicely, but for the cylinders, the translation value seems to be too large and the objects fly all over the view. We have another scene with only one cylinder in it, and that one translates properly.

I suspect that this has got to do with the way the scene was created. I tried using 3DS Max to change the order in which the nodes are drawn, and also to recreate the camera, but neither seemed to have any effect at all.

Got to do some research on this tomorrow to get an idea as to what is going on.

Tuesday, December 28, 2010

28 December 2010

Spent the day looking for the answer as to why the translations are not working as expected.

After some reading up in the OpenGL red book, I came to understand the viewing matrix better, as well as what the LookAtRH method does (by reading up on gluLookAt, which it corresponds to). I realized that we would have to factor in the position of our camera this time, because the LookAtRH method actually moves our camera to another position; it is no longer at (0, 0, 0).
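
For my own reference, here is a rough illustration (not the library's actual code) of what a gluLookAt-style right-handed view matrix does. The point is that the eye position ends up baked into the translation part of the matrix, which is why the camera position has to be accounted for:

#include <math.h>

typedef struct { float x, y, z; } Vec3;

static Vec3  vSub(Vec3 a, Vec3 b)  { return (Vec3){ a.x - b.x, a.y - b.y, a.z - b.z }; }
static float vDot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  vCross(Vec3 a, Vec3 b)
{
    return (Vec3){ a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static Vec3  vNormalize(Vec3 a)
{
    float len = sqrtf(vDot(a, a));
    return (Vec3){ a.x / len, a.y / len, a.z / len };
}

// Builds a column-major right-handed view matrix from eye, target and up,
// the same way gluLookAt does. Note how 'eye' shows up in the last column:
// the camera is moved away from the origin.
static void lookAtRH(float m[16], Vec3 eye, Vec3 at, Vec3 up)
{
    Vec3 f = vNormalize(vSub(at, eye));   // forward
    Vec3 s = vNormalize(vCross(f, up));   // right
    Vec3 u = vCross(s, f);                // corrected up

    m[0] =  s.x;  m[4] =  s.y;  m[8]  =  s.z;  m[12] = -vDot(s, eye);
    m[1] =  u.x;  m[5] =  u.y;  m[9]  =  u.z;  m[13] = -vDot(u, eye);
    m[2] = -f.x;  m[6] = -f.y;  m[10] = -f.z;  m[14] =  vDot(f, eye);
    m[3] =  0;    m[7] =  0;    m[11] =  0;    m[15] =  1;
}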

Monday, December 27, 2010

27 December 2010

Kevin's back today. Zac and I updated him on what we had done while he was away, and clarified some things along the way. It turns out that it would be tough for us to put in the DICOM images at this stage, so we are putting that on hold until the application is nearly complete.

Went back to 3DS Max today to create some 3D scenes for us to use in testing. Got that part done pretty quickly; all I had to do was recall the steps required to create the scenes and export them to POD files. I was supposed to work on implementing labels for the different nodes in the models for our application. However, I found out that the translation was not working on the iPad.

Hence, I went back to fix up the pan gesture for translation. Things were working well in the iPhone application, but on the iPad, the translation was a little off. I spent some more time studying the sample application and realized that there was a method to convert view points to eaglLayer points. The iPad view's size is larger than that of the eaglLayer that we created, so I figured that this would solve the problem.

// Maps a point in UIView coordinates onto the EAGL surface, which can be
// a different size from the view (e.g. on the iPad).
- (CGPoint)convertPointFromViewToSurface:(CGPoint)point
{
    CGRect rect = [self bounds];
    return CGPointMake((point.x - rect.origin.x) / rect.size.width * _size.width,
                       (point.y - rect.origin.y) / rect.size.height * _size.height);
}

// Same idea for a whole rect: scales both the origin and the size from
// view coordinates into surface coordinates.
- (CGRect)convertRectFromViewToSurface:(CGRect)rect
{
    CGRect bounds = [self bounds];
    return CGRectMake((rect.origin.x - bounds.origin.x) / bounds.size.width * _size.width,
                      (rect.origin.y - bounds.origin.y) / bounds.size.height * _size.height,
                      rect.size.width / bounds.size.width * _size.width,
                      rect.size.height / bounds.size.height * _size.height);
}

Got this implemented and the translation became more accurate. There was no need for conversion in the iPhone application as the view's size is the same as the eaglLayer's size. It still was not perfect though. The translation for the heart models worked well, but not for the cylinders. I really wonder why...