Thursday, December 30, 2010

30 December 2010

Finally, the pan gesture for translation is completed! The translation for all objects works very well now.

With regard to the problem I mentioned yesterday, I actually found the cause. It has got to do with the object's scale in the scene! Objects with a smaller scale translate less, while objects with a larger scale translate more, even with the same translation value. I came to realize this when I went back to take a look at the scenes that I have created. The reason the cylinders translated so much was that at the point of importing, the cylinders were very small in comparison to the heart model, and I remembered that I had scaled them up a lot.

Therefore, we would have to factor in the objects' scale in the scenes as well. Worked that out fairly quickly, and we are done! Hence, for translation, here are the things that we have taken into consideration for the translation algorithm (a rough sketch follows the list):
- Camera's Z position
- Object's Z position that is set by us when drawing (glTranslate)
- Object's default scene Z position
- Object's default scene scale
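
To show how these fit together, here is a minimal sketch of the kind of conversion involved. This is an illustration rather than our exact code; all of the parameter names are made up, and it assumes the node's translation is applied before its scale (which is why the scale matters at all).

#include <math.h>

// Rough sketch only; parameter names are illustrative, not our actual code.
static float TranslationDeltaX(float panDeltaX,     // finger movement in surface pixels
                               float viewWidth,     // EAGL surface width in pixels
                               float fovY,          // vertical field of view (radians)
                               float aspectRatio,   // surface width / height
                               float cameraZ,       // camera z position (from LookAtRH)
                               float drawZ,         // z we set ourselves when drawing (glTranslate)
                               float nodeDefaultZ,  // node's default z position in the scene
                               float nodeScale)     // node's default scale in the scene
{
    // How far the node actually sits from the camera.
    float effectiveDepth = fabsf(cameraZ - (drawZ + nodeDefaultZ));

    // Width of the view frustum at that depth.
    float frustumWidth = 2.0f * effectiveDepth * tanf(fovY / 2.0f) * aspectRatio;

    // One screen pixel corresponds to this much distance in world units at that depth.
    float worldPerPixel = frustumWidth / viewWidth;

    // Undo the node's scale so small objects do not under-translate and large ones
    // do not overshoot.
    return (panDeltaX * worldPerPixel) / nodeScale;
}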

Some improvements have been made to how we pick the objects for translation as well. Previously, every time the state of the pan gesture changed, we would do a check to see which object the user is touching. Now, we have changed it so that when the pan gesture begins, we store the index of the object that the user has selected. This way, we have prevented some issues with collisions. A rough sketch of the change is below.
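
Something along these lines, assuming a selectedNodeIndex instance variable and a hypothetical translateNodeWithIndex:gesture: helper; retrieveObjectWithUniqueColors: is the picking method from our pan handler.

// Pick once when the pan begins, then reuse the stored index while it changes.
if (gesture.state == UIGestureRecognizerStateBegan)
{
    selectedNodeIndex = [eaglView retrieveObjectWithUniqueColors:gesture];
}
else if (gesture.state == UIGestureRecognizerStateChanged)
{
    // No re-picking here, so dragging one object across another no longer
    // switches the selection halfway through the gesture.
    [self translateNodeWithIndex:selectedNodeIndex gesture:gesture];
}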

It is time to start work on labeling next week~

Wednesday, December 29, 2010

29 December 2010

Managed to get quite a few problems solved for the translation today.

I got the camera position factored into our translation algorithm, and scenes that only have objects drawn at the near plane can now be translated properly. However, if there are multiple objects in the scene at different z positions, the translation does not look right.

What needed to be done was to factor in each node's own translation vector, in particular its z position. This can easily be obtained through a method in the library that we are using.

Hence, in order for the translation to work, there are 3 things that we have to take into consideration:
- Camera's z position
- Node's default scene z position
- Node's z position that is being set by us when drawing

Once that was done, I then moved on to look into another issue with the translation.

We have a scene where there is a heart model, and 4 identical cylinders. Somehow, the translation for the heart works very nicely. As for the cylinders, the translation value seemed to be too large, and the objects fly all over the view. We have another scene where there is only one cylinder in it, and it translates properly.

I suspect that this has got to do with the way the scene was created. Tried working with 3DS Max to change the order in which the nodes are drawn, and also recreated the camera, but neither seemed to have any effect at all.

Got to do some research on this tomorrow to get an idea as to what is going on.

Tuesday, December 28, 2010

28 December 2010

Spent the day looking for the answer as to why the translations are not working as expected.

After some reading of the OpenGL red book, I got to understand the viewing matrix better, and what the LookAtRH method does (by reading up on gluLookAt). I realized that we would have to factor in the position of our camera this time, the reason being that the LookAtRH method actually moves our camera to another position, not (0, 0, 0) anymore.

Monday, December 27, 2010

27 December 2010

Kevin's back today. Zac and I updated him on what we have done while he was away, and clarified some things along the way. Turns out that it would be tough for us to put in the dicom images at this stage. Hence, we are putting that on hold till our application is nearly done.

Went back to 3DS Max today to create some 3D scenes for us to use in testing. Got that part up pretty quickly. All I had to do was to recall the necessary steps required to create the scene and export them into POD files. I was supposed to work on implementing labels to label the different nodes in the models for our application. However, I found out that the translation was not working for the iPad.

Hence, I went back to fix up the pan gesture for translation. Things were working well for the iPhone application, but on the iPad, the translation was a little off. Spent some more time studying the sample application, and I realized that there was a method to convert view points to eaglLayer points. The iPad view's size was larger than that of the eaglLayer that we created, hence, I figured that this would solve the problem.

- (CGPoint)convertPointFromViewToSurface:(CGPoint)point
{
    CGRect rect = [self bounds];
    // Normalize the point against the view's bounds, then scale it up to the
    // EAGL surface's size.
    return CGPointMake((point.x - rect.origin.x) / rect.size.width * _size.width,
                       (point.y - rect.origin.y) / rect.size.height * _size.height);
}

- (CGRect)convertRectFromViewToSurface:(CGRect)rect
{
    CGRect bounds = [self bounds];
    // Same idea as above, applied to both the origin and the size of the rect.
    return CGRectMake((rect.origin.x - bounds.origin.x) / bounds.size.width * _size.width,
                      (rect.origin.y - bounds.origin.y) / bounds.size.height * _size.height,
                      rect.size.width / bounds.size.width * _size.width,
                      rect.size.height / bounds.size.height * _size.height);
}
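
For the record, the conversion just gets applied to the touch point before it is used in the translation maths, roughly like this (variable names are illustrative):

CGPoint viewPoint = [gesture locationInView:eaglView];
CGPoint surfacePoint = [eaglView convertPointFromViewToSurface:viewPoint];
// surfacePoint is now in the EAGL surface's pixel space, so the same translation
// calculations behave the same on both the iPhone and the iPad.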

Got this implemented and the translation became more accurate. There was no need for conversion in the iPhone application as the view's size is the same as the eaglLayer's size. It still was not perfect though. The translation for the heart models worked well, but not for the cylinders. I really wonder why...

Friday, December 24, 2010

24 December 2010

It's Christmas eve! Received an email a few days ago from HR stating that today's a half-day.

Decided to look into the gluLookAt method in OpenGL. Though this method is not available in OpenGL ES, I figured that it would help me in one way or another in figuring out the issues causing the translation to not work properly. There is a method in the C++ library that we are using to create a viewing matrix (the camera, in other words) - LookAtRH. I presume that it works in a similar way to the gluLookAt method.

Though I read through some sources, I am still kind of puzzled. Gonna have to spend some more time on this next week.

Thursday, December 23, 2010

23 December 2010

Resumed work on getting the pan gesture for translation up. Tried out another method of translation after studying a similar application. This method does not apply the calculation of the frustum's dimensions at a particular z depth, which at first made me wonder why it worked so well in that application.

- (void)panGestureCaptured:(UIGestureRecognizer *)gesture
{
    ...
    ...
    ...

    static CGPoint currentLocation;
    static CGPoint previousLocation;

    // Pick the node under the touch (color picking) and grab its current position.
    int objIndex = [eaglView retrieveObjectWithUniqueColors:gesture];
    Vertex3D tempCurrentPosition = [eaglView getNodePositionWithIndex:objIndex];

    if (gesture.state == UIGestureRecognizerStateBegan)
    {
        // Remember where the pan started, in OpenGL coordinates.
        currentLocation = [gesture locationInView:[gesture view]];
        currentLocation = [self convertToGL:currentLocation];
    }
    else if (gesture.state == UIGestureRecognizerStateChanged)
    {
        // Work out how far the finger has moved since the last update.
        previousLocation = CGPointMake(currentLocation.x, currentLocation.y);
        currentLocation = [gesture locationInView:[gesture view]];
        currentLocation = [self convertToGL:currentLocation];
        CGPoint diff = CGPointMake(currentLocation.x - previousLocation.x,
                                   currentLocation.y - previousLocation.y);

        // Hand the new position to the EAGLView through a ModelTransformation.
        ModelTransformation *obj = [[ModelTransformation alloc] init];
        obj.xTranslation = tempCurrentPosition.x + diff.x;
        obj.yTranslation = tempCurrentPosition.y + diff.y;
        obj.ifTranslate = YES;
        obj.node = objIndex;
        [eaglView setTransformation:obj];
        [obj release];
    }
}

- (CGPoint)convertToGL:(CGPoint)uiPoint
{
    // Flip the UIKit coordinates (origin at the top left) into OpenGL-style
    // coordinates, taking the current device orientation into account.
    float newY = eaglView.frame.size.height - uiPoint.y;
    float newX = eaglView.frame.size.width - uiPoint.x;
    CGPoint ret = uiPoint;
    switch ([UIDevice currentDevice].orientation) {
        case UIDeviceOrientationPortrait:
            ret = CGPointMake(uiPoint.x, newY);
            break;
        case UIDeviceOrientationPortraitUpsideDown:
            ret = CGPointMake(newX, uiPoint.y);
            break;
        case UIDeviceOrientationLandscapeLeft:
            ret.x = uiPoint.y;
            ret.y = uiPoint.x;
            break;
        case UIDeviceOrientationLandscapeRight:
            ret.x = newY;
            ret.y = newX;
            break;
        default:
            // Face up/down or unknown orientation: leave the point as it is.
            break;
    }
    return ret;
}

This seems to work really nicely with some of the models that we have. Others just do not seem to work well at all.

Wednesday, December 22, 2010

22 December 2010

Seems like I have missed something critical too. There is also this Patient Position (0018, 5100) tag that we have to factor in as well. The axes of the patient-based coordinate system vary for different patient positions. The following images should illustrate this well enough:

Fig. 1 - Head First-Supine (HFS)

Fig. 2 - Feet First-Supine (FFS)

Fig. 3 - Head First-Prone (HFP)

Fig. 4 - Feet First-Prone (FFP)


Fig. 5 - Imaging Equipment

The figures above can be found here as well, and there are also some comprehensive explanations:
http://xmedcon.sourceforge.net/Docs/OrientationXMedConPatientBased


When facing the front of the imaging equipment (refer to Fig. 5),
- Head First is defined as the patient’s head being positioned toward the front of the imaging equipment.
- Feet First is defined as the patient’s feet being positioned toward the front of the imaging equipment.
- Prone is defined as the patient’s face being positioned in a downward (gravity) direction.
- Supine is defined as the patient’s face being in an upward direction.
- Decubitus Right is defined as the patient’s right side being in a downward direction.
- Decubitus Left is defined as the patient’s left side being in a downward direction.

Here are the terms that can be found in the Patient Position (0018, 5100) tag:
- HFP (Head First-Prone)
- HFS (Head First-Supine)
- HFDR (Head First-Decubitus Right)
- HFDL (Head First-Decubitus Left)
- FFP (Feet First-Prone)
- FFS (Feet First-Supine)
- FFDR (Feet First-Decubitus Right)
- FFDL (Feet First-Decubitus Left)


After a short discussion with Zac yesterday, I decided to look a little more into the patient-based coordinate system, in particular, to find out where the origin is. Since the position and orientation of the dicom images are provided to us with respect to the patient-based coordinate system, we would have to know where the origin is in order to map them onto the models in our application. Looked around for quite some time, but I cannot seem to find any sources that talk about the origin of the patient-based coordinate system.

Zac and I then had a short discussion again, and somehow we realized that we do not need to know the origin of the patient-based coordinate system at all. We are only loading the specific object models, not the whole human body. Hence, we would only need to know which way the images should face, and at which orientation, using the following 3 tags in the dicom images:
- Patient Position
- Image Position
- Image Orientation

The newly renamed images now look like this:
IM-0001-0001_x-74.798828125y-268.798828125z-82.5r100c010pFFS.png

In the case above,
x = -74.798828125
y = -268.798828125
z = -82.5
r (row vector) = (1, 0, 0)
c (column vector) = (0, 1, 0)
p (Patient Position) = FFS (Feet First-Supine)
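
Just to keep track of what the naming scheme encodes, here is a rough sketch of how such a file name could be parsed back out. This is purely illustrative, not code from our application, and it assumes the simple digit-only row/column vectors we are using at the moment.

NSString *name = @"IM-0001-0001_x-74.798828125y-268.798828125z-82.5r100c010pFFS.png";
NSScanner *scanner = [NSScanner scannerWithString:name];

double x, y, z;
NSString *row = nil, *col = nil, *patientPosition = nil;
NSCharacterSet *digits = [NSCharacterSet decimalDigitCharacterSet];

[scanner scanUpToString:@"x" intoString:NULL];             // skip the "IM-0001-0001_" prefix
[scanner scanString:@"x" intoString:NULL];
[scanner scanDouble:&x];                                   // -74.798828125
[scanner scanString:@"y" intoString:NULL];
[scanner scanDouble:&y];                                   // -268.798828125
[scanner scanString:@"z" intoString:NULL];
[scanner scanDouble:&z];                                   // -82.5
[scanner scanString:@"r" intoString:NULL];
[scanner scanCharactersFromSet:digits intoString:&row];    // "100" -> row vector (1, 0, 0)
[scanner scanString:@"c" intoString:NULL];
[scanner scanCharactersFromSet:digits intoString:&col];    // "010" -> column vector (0, 1, 0)
[scanner scanString:@"p" intoString:NULL];
[scanner scanUpToString:@"." intoString:&patientPosition]; // "FFS"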

Finally, it is time for me to move on back to work on the pan gesture to translate feature.

Tuesday, December 21, 2010

21 December 2010

Time really flies. We would be ending our internship in just 9 more weeks, including this week. Gonna miss the people and this place a lot.

Read the PDF document which I chanced upon yesterday. The tag (0020, 0032) Image Position (Patient) specifies the x, y, and z coordinates of the upper left hand corner of the image. This is also the center of the first voxel (a 3-dimensional equivalent of a pixel) that is being transmitted. This position is relative to the patient-based coordinate system:

- The x-axis is increasing to the left hand side of the patient.
- The y-axis is increasing to the posterior side of the patient.
- The z-axis is increasing toward the head of the patient.

This link further illustrates the patient-based coordinate system. It has several pictures there too:
http://www.itk.org/Wiki/Proposals:Orientation

In each image frame, the Image Position (Patient) (0020,0032) specifies the origin of the image with respect to the patient-based coordinate system.

Managed to rename a set of 76 png images with the respective image positions, but something seems to be wrong. Even with the image position, I would need to know which direction the image should be facing, and at which orientation. This definitely has got something to do with the Image Orientation (Patient) (0020,0037) tag.

Read a number of sources on the web, and it is mentioned that the Image Orientation tag specifies the direction cosines of the first row and the first column with respect to the patient. Did some searching on the web, but I still do not understand what that sentence is supposed to mean.

Towards the end of the day, I managed to find a link. From there, I got to understand the Image Orientation tag. This is the tag that identifies the direction in which the image should be facing as well as its orientation. Here's the link:
http://www.medicalconnections.co.uk/wiki/Image_Orientation_Attributes
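
In short, the tag stores two unit vectors in the patient-based coordinate system: the direction along a row of pixels and the direction down a column. Crossing them gives the direction the slice is facing. A tiny sketch in plain C, using the values from the file names below:

// Row and column direction cosines from Image Orientation (Patient),
// e.g. r = (1, 0, 0) and c = (0, 1, 0).
float r[3] = {1.0f, 0.0f, 0.0f};
float c[3] = {0.0f, 1.0f, 0.0f};

// The slice normal (the direction the image faces) is their cross product.
float normal[3] = {
    r[1] * c[2] - r[2] * c[1],
    r[2] * c[0] - r[0] * c[2],
    r[0] * c[1] - r[1] * c[0],
};
// For (1, 0, 0) x (0, 1, 0) this gives (0, 0, 1).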

Now, I only have the png images named with the x, y and z coordinates. In addition to that, I would have to find a way to put in the direction cosines as well; otherwise, we would not be able to identify where the image should face and at which orientation. Got the renaming done at the end of the day. Now, the file names look like this:

IM-0001-0001_x-74.798828125y-268.798828125z-82.5r100c010.png

In the case above,
x = -74.798828125
y = -268.798828125
z = -82.5
row vector = (1, 0, 0)
column vector = (0, 1, 0)

Monday, December 20, 2010

20 December 2010

The office seems so quiet today, as if no one's working today. Weird. Oh well, holiday season?

Started the day by continuing the search for image converters. While doing that, I realized that on Windows, the .dcm images that I have cannot be viewed properly. I downloaded some image viewers, and all of them show a black patch at the top followed by a white patch at the bottom. This explains why the converters did not seem to work properly when I tried them out on Thursday last week.

Decided to do the manual conversion myself. Fortunately, there is this feature in OsiriX that allows exporting of the dicom images to jpeg format. From there, I then learnt how to make use of the Automator tool in OSX to do the conversion from jpeg to png. Learnt it here:
http://www.devdaily.com/mac-os-x/batch-convert-bmp-to-jpg-png-tiff-image-files-free

Got this done pretty quickly. The tough part is the renaming of the images. I have to fish out the position information and put it into the name of the image. But before that, I would have to find out which of the dicom metadata tags holds the image's position information. I am suspecting that it is the tag (0020, 0032) Image Position (Patient). I am still not very sure though. Found this PDF document on the web:
http://medical.nema.org/dicom/2004/04_03PU.PDF

Got to read through that to find out more, and hopefully it helps. In that document, it is said that there is also another field, called the Image Orientation (Patient) (0020, 0037). It is mentioned that this field should be provided as a pair with the Image Position (0020, 0032) field. Getting a little confused here.

Thursday, December 16, 2010

16 December 2010

Finally managed to fish out the formula that the library uses to calculate the perspective view. In the end, it's the same formula that we used previously. At least now I have my problem narrowed down. Seems like there is probably no difference between the way we create the perspective view now and the way we did it previously.

Looks like it's the camera that we have set up that is causing the problem. Setting up the camera is necessary so that we are able to see the loaded object without doing an extra translation along the z-axis. Even if that extra translation is done, the object is still not displayed correctly. Furthermore, without the camera set up, none of the objects' animations are displayed.

Zac and I had a small discussion at around 2 p.m. We decided to pause the explode functionality and the pan-to-translate gesture first. Both of us felt that these 2 functionalities would definitely take quite some time. Hence, we moved on to work on the plane cutting functionality, and the reading of the dicom location information.

I first started out looking for conversion tools that can do a dicom to png conversion. Found quite a few, but they only work on Windows. I tried out a few of them, but none of them worked at all. Some did not even do a conversion, and some conversions gave me png images that looked totally different from the original dicom images.

Wednesday, December 15, 2010

15 December 2010

Decided to figure out the formula that the library uses to create the perspective view today. Spent the day studying the methods in the library and how they work. Did some searching on the web as well to check on the various ways people create perspective views. Though I did not manage to have any breakthroughs for the pan to translate gesture today, I managed to resolve some issues in the application along the way. They had to do with the color picking method that we used. We missed out some parts of the drawing, which caused the display to go haywire at times.

Along the way, Zac and I also had several discussions. We talked about how to go about improving the rotation for both the object as a whole, as well as the individual components. We found the current swipe to rotate too static, as every swipe only rotates the object by 45 degrees. Hence, we decided to make use of the pan gesture to do the rotation. Once we got the idea laid out nicely, Zac took over this while I continued to work on the pan to translate gesture.

Tuesday, December 14, 2010

14 December 2010

Spent the whole day working on the pan to translate gesture. I thought it was going to be fairly simple at first, since we already got the algorithm up in our previous application where we worked with .obj files.

That obviously was not the case. Somehow, the same algorithm does not translate the object we have in the current application properly. From the looks of it, it seems that the final translation value was not enough to produce the same effect that we see in our previous application.

The only difference that I see between the two applications is that in the current one, the perspective projection matrix is calculated through the C++ library, and we have a camera set up. In the previous one, we made use of glFrustumf to create our perspective view.

I spent the whole day trying to work things out, but to no avail. It seems that the formula that we used does not work in our current application at all. Got to spend some more time trying to understand how this whole thing works.

Monday, December 13, 2010

13 December 2010

Started the day by implementing the ModelTransformation class and tweaking our application to cater for that.

At first, both of us expected lots of work to be done in order to get the ModelTransformation class working. However, we got that up fairly quickly before lunch, including the tweaking of the swipe to rotate gesture. I think the fact that we were able to make such changes in such a short time really reflects our understanding of our application.

After lunch, we decided to split the work among ourselves. I worked on the double tap to scale gesture while Zac researched on how to go about implementing the explode functionality. I managed to complete my part at the end of the day. The UIGestureRecognizer is really useful for our application. Managed to get the effect of double tapping a photo in the iPhone's photo application.

Saturday, December 11, 2010

06 December 2010 - 10 December 2010

Was sick for the whole week again, and this really affected the pace at which the project is advancing. I tried my best to make up for the lost time.

Managed to get several things working this week. Zac and I worked on the iPad layout together, and I managed to improve how the UIPopoverController animates. The interface for the iPad seems to be on the verge of completion, unless we have to add in new stuff in the future.

Then we started porting over the transformation gestures that we had done before. We first started with the swipe to rotate gesture, and got it done without much trouble. However, there was still an issue with the use of the C++ library. Xcode gave many warnings highlighting various conflicts. Despite these warnings, our project still ran well. Still, we decided to rework our application in such a way that only the EAGLView imports the C++ library.

After some discussions with Kevin, we decided to come up with a ModelTransformation class. This class would contain the different transformations that are passed over when the gestures are captured. Then, we would pass an instance of this class into the EAGLView to perform the necessary transformations.
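
Roughly, the interface looks something like this. The translation-related properties are the ones used by the pan handler in the 23 December entry; treat the exact shape of the class as indicative only.

#import <Foundation/Foundation.h>

@interface ModelTransformation : NSObject
{
    // Which node in the scene this transformation applies to.
    int node;

    // Translation to apply, plus a flag saying whether to apply it.
    float xTranslation;
    float yTranslation;
    BOOL ifTranslate;
}

@property (nonatomic, assign) int node;
@property (nonatomic, assign) float xTranslation;
@property (nonatomic, assign) float yTranslation;
@property (nonatomic, assign) BOOL ifTranslate;

@end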

We decided to leave that to be done the following week, and went on to set up the SVN on our machines. Having an SVN would definitely save us lots of time.

Sunday, December 5, 2010

29 November 2010 - 03 December 2010

Didn't have the chance to blog daily for this week. Been sick for this whole week, and even had to go on MC on the 30th (Tuesday).

I worked on creating the animations for the 3D models for most of this week. First, I worked with Blender. Got the animations up fairly quickly, but they are just some simple mesh animations. However, the frames got lost when the .dae file was converted to a .pod via the Collada2POD utility. Tried searching on the web for help, but couldn't find any. I even resorted to posting on the PowerVR forums for help. Did not get any reply though, even now.

Hence, I decided to try exporting the model directly into a .pod file. But in order to do that, I would have to work with either Autodesk 3D Studio Max or Autodesk Maya. Plug-ins for exporting to a .pod file are only provided for these 2 modeling tools. Decided to make use of 3D Studio Max since one of our colleagues has a license for it.

Since 3D Studio Max can only be run on either a Windows or a Linux platform, I had to work with Bernard. He is also the one that has the license for the software. Talked to him for a bit and he allowed me to use the Windows PC at his desk, since he would be going for a course for 2 days. It was really nice of him.

But after that, Kevin ran Boot Camp on the MacBook Pro and installed Windows on it. So instead of going over to Bernard's desk, I decided to work on the MacBook Pro. I got the trial version of 3D Studio Max installed and played around with it. I feel like a graphics designer when working with these modeling tools! It's a whole new thing to me, and I had to look for tutorials to learn the various functionalities in the modeling tool. Hence, it is another valuable learning process for me.

I worked with Gim Han as well so as to get the heart mesh from him. He sent me 3 different models that have different vertex sets. I managed to get the vertex animations up fairly quickly. We were using a technique called morphing to come up with the animations.

The problem came just when we thought that the animations were complete. When I tried exporting the model into a .pod file, the animations were missing. Then it came back to me. POD files do not support vertex animations. We had a discussion with Kevin, and decided that we should move on to the other components of our project first. While working on the other components, we would have to read up on other techniques that we could use to come up with the animations.

Hence, on Friday, we watched the podcast lectures to find out more about the UI controls for the iPad, namely the UISplitViewController and the UIPopoverController. I played around with these controls, and we discussed our actual UI layout for the iPad application. Kevin, Zac and I gave our suggestions as to how the interface should look. My experience from working on my Major Project, which is also a medical application, helped quite a bit, since I was the one working on the UI then. We managed to arrive at a layout that would give the best user experience.

I managed to get the UISplitViewController working, but there is still a problem when the interface orientation changes to landscape on the iPad. Somehow, the EAGLView cannot be seen at all, even though it is still there. Then we realized that it has got something to do with the animations when we draw the view. If the animation is stopped before the rotation, we could see the view clearly. However, the moment we turn the animation back on, it disappears from our sight once again.

Spent quite some time debugging, but still could not solve the problem. We would have to work on this next week.

I am glad that in TP, we got many chances to work with others for our assignments. With this experience, I was able to work better with my colleagues whenever I needed help from them.

Friday, November 26, 2010

26 November 2010

Got my hands on a utility that we can use to convert .dae files to .pod files. It can be found here:
http://www.imgtec.com/powervr/insider/powervr-collada2pod.asp

Managed to get some sample .dae files to test out the conversion, and this website provides some good models:
http://www.collada.org/owl/

Once I got the conversion done, I tested it out with PVRShaman. This utility allows me to import the .pod files that I have and view them. I can even do things like rotation, panning, etc. Changing the display to wireframe can be done too. With that, I am actually able to see that the .dae file is converted to .pod nicely. Here's the utility:
http://www.imgtec.com/powervr/insider/powervr-pvrshaman.asp

Many 3D modeling tools out there are able to export the models into .dae files. Now the question is, am I able to use Blender to export the models into .pod files directly? If that can be done, we can easily skip the step for conversion. Did quite a lot of research, and realized that most people out there do the conversion using this method.

I then looked into the requirements for having animations in .pod files. Did some analysis of the .dae files that I downloaded, and found out that the animations are actually stored inside the .pod file itself. After going through some forums, it seems many users create animations by capturing the frames. Once they have the animations up, they can easily export the 3D model into a .dae file, which can then be converted to a .pod file. There, we would then have our animations.

Thursday, November 25, 2010

25 November 2010

Carried on with correcting the textures implementation today. Still could not get things working properly.

Kevin then came in and brought up his concern on the different file formats that we can use to load our 3D models. Hence, we went on to do some research to find out which file format is best supported by the hardware that's being used by the iPad, which is the POWERVR chipset.

After lunch, we discussed what we had researched and finally decided to work with .pod files (aka POWERVR Object Data). Zac was tasked to look into loading the .pod files, while I looked into the utilities that are available to obtain .pod files, be it by converting or by exporting directly from 3D modeling tools.

Had a short feedback session with Kevin before leaving the office, and I received some really valuable feedback from him. Gonna keep his feedback in mind and work actively to improve myself. It's very seldom that we meet people who are willing to go out of their way and give you constructive feedback and let you know about yourself, be it the positive or the negative side. I'm gonna have to cherish this opportunity to improve.

"Never be afraid to question"

Wednesday, November 24, 2010

24 November 2010

Realized that my textures weren't working very well. Had to do something about the texture indices and model indices as they're different, and these affect how the model is drawn.

Spent the day trying to understand the sample code that we have. The code is kinda complex; hopefully I can get it working soon.

Tuesday, November 23, 2010

23 November 2010

Zac and I started work on lighting and textures today. We decided to split the work: Zac was to work on lighting, and I was to work on textures. But before we started, Zac explained how he solved the translation issue. It really was the problem that we suspected during our small discussion on the train home yesterday.

Read this link and I got a fairly clear picture of what to do. I find it really useful as the author explains every step clearly. Check it out here:


Got the textures up at the end of the day. Zac got lighting up too. However, there's still much improvement to be made on our work. Gonna work on that tomorrow.

Monday, November 22, 2010

22 November 2010

Spent the entire day working on the pan gesture for translation. Zac and I had a hard time trying to cater for 3D translation. Thus, we went back to understand how the frustum is being created again.

Although we have understood what was going on, we still could not come up with a proper solution. The center of the object does not follow our touch at all.

Later in the day, Kevin came in and brought up his solution. We tried it out, but there were still some minor problems. Gonna work out those solutions with Zac tomorrow.

Friday, November 19, 2010

19 November 2010

Touched up on the scaling of the 3D models in the morning. Now the object is scaling nicely. Kevin then came in and we discussed what to do next. Here's a picture of what we have discussed:


We would be following the schedule stated in the top right corner of the board. Left the layout and gestures for last as we have to wait for the iOS 4.2 SDK to be released, so that we can work on the iPad.

Lunched at Clementi with Kevin, Calvin and Zac. Went there as Kevin wants to buy the iPhone 3D programming book. We ate and then went to get Koi bubble tea. Hehe. It was a really funny experience. Shall not mention the details here! :P

After lunch, we came back and I added in the part on 3D model translations. I already had it done up previously, just had to port it over to this project. With this, we have our version 2.0 up and running. Spent some time solving memory leaks here and there.

*edit* translation was not working well. Had to tweak it here and there. Spent some time trying to understand what happens when the model moves in the negative z-axis. Somehow got the formula up, but still have to try it out on Monday to see if it works.

Thursday, November 18, 2010

18 November 2010

Got quite a lot of things done today. Completed the port of the swipe gesture rotation to OpenGL ES 1.1 today. Not many changes had to be made, so it was fairly quick. Still, individual object interaction with multiple objects loaded was not catered for yet.

Moved on to make use of pinch gesture for scaling, and managed to get it up. Still got some tweaking to do tomorrow though. Got to watch the podcast lecture first.

With that done, Zac and I then moved on to research how to interact with individual objects. We found 2 different approaches: one makes use of a radius check, the other makes use of colors. After weighing the pros and cons, we decided to go with colors first. Got things working nicely at the end of the day :) We're able to do individual object rotations with multiple objects loaded in the view. A rough sketch of the color picking idea is below.
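
This is only a sketch of the general technique, not our actual code; drawNodesWithUniqueColors and backingHeight are assumptions about how such a setup would typically look.

// Draw each node with a flat, unique color (lighting and textures off), then
// read back the pixel under the touch to find out which node was hit.
- (int)nodeIndexAtPoint:(CGPoint)surfacePoint
{
    [self drawNodesWithUniqueColors];   // hypothetical: node i is drawn with color (i + 1)

    GLubyte pixel[4] = {0, 0, 0, 0};
    // OpenGL's y axis runs bottom-up, so flip the y coordinate.
    glReadPixels((GLint)surfacePoint.x,
                 (GLint)(backingHeight - surfacePoint.y),
                 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);

    // Map the color back to a node index; -1 means the background was hit.
    return (int)pixel[0] - 1;
}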

Tuesday, November 16, 2010

16 November 2010

Completed the rotation of a single 3D model before lunch today. Did it by doing some matrix multiplications. I first created the matrix for rotation about the y-axis, then another for rotation about the x-axis. By multiplying these 2 matrices, I was able to get the final rotation matrix, which is used to transform the 3D model. However, there are still some faults here and there. I brought this up to Kevin and was told that it is fine for now. He mentioned that in order to solve this, I would have to look into an even more complex concept, quaternions. A small sketch of the matrix setup is below.
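
A minimal sketch of the idea in plain C (3x3 is enough to show it; angles in radians). This illustrates the multiplication order rather than my exact code.

#include <math.h>

// out = ry * rx: the x rotation is applied first, then the y rotation.
static void BuildRotation(float angleX, float angleY, float out[3][3])
{
    float rx[3][3] = {
        {1, 0,             0            },
        {0, cosf(angleX), -sinf(angleX) },
        {0, sinf(angleX),  cosf(angleX) },
    };
    float ry[3][3] = {
        { cosf(angleY), 0, sinf(angleY) },
        { 0,            1, 0            },
        {-sinf(angleY), 0, cosf(angleY) },
    };

    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 3; j++) {
            out[i][j] = 0.0f;
            for (int k = 0; k < 3; k++) {
                out[i][j] += ry[i][k] * rx[k][j];
            }
        }
    }
}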

With that, version 1.0 was up. However, all 3 of us realized that OpenGL ES 2.0 was tripping us up. After some discussions, we decided to move to OpenGL ES 1.1. Hence, Zac and I are required to archive what we have done in OpenGL ES 2.0, and then modify that code to fit OpenGL ES 1.1.

Many applications out there simply load an EAGLView for the whole application, without incorporating controls such as UITableViews, UINavigationBars, etc. Currently, we are being tasked to look into the proper way of having additional controls together with the EAGLView.

Oh and one more thing, we celebrated Calvin's birthday today at Infuse (level 14). Tomorrow's Calvin's birthday. Happy birthday Calvin! :)

Monday, November 15, 2010

15 November 2010

Continued working with user interactions today, and the focus was on the swipe gestures to rotate the 3D model that Zac has imported. I was able to get the up, down, right and left rotations up fairly quickly. However, that was not what I wanted. I was only able to rotate either up and down, OR right and left. If, for example, I do a right rotation followed by an up rotation, the model goes haywire. It does not rotate properly.

I then moved on to find out how to do simultaneous rotations on both the x and y axes. From there, I looked into matrices as they are the ones controlling the model's transformations. I had a little knowledge of matrices as I took additional maths in secondary school, but I was not very good at it. Did some research by reading through the explanations on the iphonedevelopment.blogspot.com blog and the OpenGL red book. Got to understand matrices a bit more, and managed to improve the rotations. There are still some faults here and there though. Have to look into matrices more tomorrow!

Sunday, November 14, 2010

12 November 2010

Gained a better understanding of the glFrustumf today. Had some discussions with Kevin and some colleagues that he brought in. They were really helpful and seriously intelligent.

We realized that the normalized device coordinates always range from -1.0 to 1.0 on both the x and y axes. Hence, during the conversion of window coordinates to normalized device coordinates, I have to scale the values down. With this, I was able to translate the positions properly. A quick sketch of the scaling is below.
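
Something along these lines, assuming the view's bounds as the window size (illustrative, not necessarily my exact formula):

// Maps a UIKit window point into normalized device coordinates (-1.0 to 1.0),
// flipping y because UIKit's origin is at the top left.
- (CGPoint)normalizedPointFromWindowPoint:(CGPoint)p
{
    CGSize size = self.view.bounds.size;
    float ndcX = (p.x / size.width) * 2.0f - 1.0f;
    float ndcY = 1.0f - (p.y / size.height) * 2.0f;
    return CGPointMake(ndcX, ndcY);
}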

The next thing that I am trying to solve is the problem with perspective. If the zNear and zFar values are not in the positive range, we would not be able to see any perspective. If we have perspective, the translation of coordinates would not be displayed in the way that we want it to be. Hence, I would have to spend more time trying to understand how this works on Monday.

Spent some time looking into swipe gestures as well. Earlier this week, I was only able to do swiping in one direction, left to right. I found out that we are actually able to swipe up, down, left and right. Now the next thing that I want to look into is whether swipe gestures would work when I have pan gestures added to the view. These gestures do not seem to work well on the simulator. I will probably request to try this out on a real device to see if it's a limitation of the simulator.

Thursday, November 11, 2010

11 November 2010

Chanced upon this link as I revisited drawing 3D cubes using OpenGL ES. The author has some useful tutorials there for beginners. I did this thinking that I probably drew the cube wrongly, but that was not the case. Spent more than half of the day trying to draw cubes.

I then decided to revert back to what the template used to create the glFrustumf, and the cube was drawn nicely. However, the coordinate translation went haywire. Therefore, I suspect that my issue has got to do with the creation of the glFrustumf. Hence, I decided to do more research on this to gain a better understanding.

Kevin came in to check on us at the end of the day. I had the direction in mind, but once again, I failed to express what I had in mind properly, which caused confusion for Kevin and Zac. I probably went into too much detail too. Kevin then told me to research the following terms aside from just glFrustumf:
- glMatrixMode
- glViewPort
- glLoadIdentity

Gonna do some research back at home though Kevin says that I can get it done by Monday.

Wednesday, November 10, 2010

10 November 2010

Continued with the movement interaction today. Had a short meeting with Kevin and Zac to discuss the formula to use for translation. That didn't seem to work well either. We went on to research more on glOrthof and glFrustumf. After some explanation by Kevin, I slowly got to understand more of what they can do.

Hence, I went on to work with glOrthof first, which is for 2D models. I applied the formula that I came up with and finally managed to get what I wanted. From there, I then moved on to make use of glFrustumf, to work with 3D models. All seems to work pretty well, but my cube doesn't seem to be a cube anymore. Or probably it is. Got to look more into this issue and solve it by tomorrow!

Looks like I would have to work with more mathematics in time to come. Wish I had put in more effort in mathematics during my secondary school days :S

Tuesday, November 9, 2010

09 November 2010

Continued working on the model interactions today. I decided to first work on the moving of the model. First I did some simple movement of subviews with the use of UIGestureRecognizer. It has quite a lot of similarities to UITouch.

With that done, I then moved on to try out the same interaction, but with the 3D model. I realized that I was able to do a glTranslatef on the model; however, the model does not seem to appear properly. Then I realized that I would have to do some conversion of the coordinates. The UIGestureRecognizer uses window coordinates, while OpenGL ES uses normalized device coordinates for model translation. Here are some pictures to illustrate the differences:


Therefore, I proceeded on to work on the coordinates translation. Still have yet to work things out properly. The problem now lies in the part where the model crosses x = 0 and y = 0. There's definitely something wrong with the formula that I have come up with. Gonna have to do more thinking, debugging and researching.

I tried a workaround by making use of UIAnimations. However, while doing that, I came to realize that the rendering method is already making use of UIAnimation.

Monday, November 8, 2010

08 November 2010 - Start of OpenGL ES & Project

Kick started the day by going through Stanford University's podcast lecture on OpenGL ES. Completed some basic sample codes of drawing 3D models and rotating them.

Once that was done, we gave Kevin a call and he came over to have a discussion with us. We discussed the things to be done for our project and split the work among ourselves. Zac chose to work on the first stream, which is to work with the model. Thus, I would be working on the interaction stream.

Here is the picture of what we have discussed:

I would first have to complete the 3 basic interactions, namely rotation by flicking, scaling by pinching, and moving by dragging.

Thursday, November 4, 2010

04 November 2010 - Bonjour Completed! :D

Completed the last stage of our Bonjour application today! :D We were able to do the synchronization of the displays fairly quickly. After that, we also spent quite some time trying to solve the memory leaks that we had. Learnt quite a lot about the Leaks instrument today. I guess we better do proper memory management next time as we code. If not, we would have to spend a lot of time trying to solve leaks, especially with big projects.

Tried solving the problem with UIAlertView's memory leak, but to no avail. It wasn't much of a big problem though. Found this link while looking for solutions, might come in handy next time :)

This application is just the cherry on the cake. Since we have completed the cherry, it's time for us to start working on the cake. We must complete the cake! Therefore, we are gonna start on OpenGL next week! Sounds fun. Hehe.

Wednesday, November 3, 2010

03 November 2010 - Bonjour, multiple iOS sending data to OSX (listener)

Started the day with the setting up of the MacBook Pro as we needed a third simulator to simulate multiple iOS devices. While doing that, we did research on how to go about having our listener receive data from multiple iOS devices. Spent quite a long time on the research, but in the end, nothing much could be found. Therefore, we decided to try things out ourselves.

Currently, our application has only instance variables for one NSInputStream and one NSOutputStream. This definitely cannot cater for multiple users. Then I thought that since we have the stream delegate, adding the various different streams into an array whenever there is a new user would not be a problem. I went on to try that out and it really worked. Our server was able to receive data from multiple users.

However, there was still one problem. Data sent from multiple users was merged together as it all made use of the same instance variable on the server, which is used to store leftover data. We went on to research how to get the client's device name the moment a connection is established. This way, we would be able to identify the different users and have individual NSInputStreams, NSOutputStreams, as well as leftover data for each user. We spent quite a long time trying to get an answer, but all we could do was retrieve the client's IP address. From there, we reworked our application in such a way that it keys off the device's IP address instead of the name. A rough sketch of that bookkeeping is below.
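
Something along these lines (the method and key names are placeholders; the point is just that everything is keyed per client):

// One entry per connected client, keyed by its IP address. Each entry keeps
// that client's streams and its own leftover-data buffer, so packets from
// different clients never get mixed together.
- (void)registerClientWithIP:(NSString *)clientIP
                 inputStream:(NSInputStream *)input
                outputStream:(NSOutputStream *)output
{
    NSMutableDictionary *client = [NSMutableDictionary dictionary];
    [client setObject:input forKey:@"input"];
    [client setObject:output forKey:@"output"];
    [client setObject:[NSMutableData data] forKey:@"leftover"];
    [clients setObject:client forKey:clientIP];   // clients is an NSMutableDictionary ivar
}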

We managed to complete the second stage of the application today - having multiple iOS devices connecting and sending data to the listener (OSX). We would be working on the third stage tomorrow - one client sends data to the server, the server receives the data and sends it to all connected clients, resulting in synchronized displays on the server as well as on all clients connected to the server.

Tuesday, November 2, 2010

02 November 2010

Kevin discussed with us some of the things for us to work on. For now, we'll have to work with the Bonjour protocol, where one Mac OSX would be the listener, and multiple iOS devices would be able to send data over for interactions to occur.

There are three stages to this. First, we'll have to set up the listener, which is the Mac OSX, and allow a single iOS device to connect to it - e.g. when iPad A does zooming, it would be reflected on the listener's display.

Next, we would have to work on having the listener listen to multiple iOS devices connecting to it - e.g. when iPad A does zooming, it would be reflected on the listener's display. When iPad B does zooming, it would be reflected on the listener's display as well.

Last, we would have to ensure that all devices connected to the listener would be synchronized - e.g. when iPad A does zooming, this would be reflected on the listener's display, as well as iPad B's display. When iPad B does zooming, it would be reflected on the listener's display, as well as iPad A's display.

Here is the picture showing our milestones:

We have completed the first stage and are now setting up the MacBook Pro to be used in testing multiple iOS devices.

Monday, November 1, 2010

01 November 2010 - Bonjour Application, Verge of Completion?

Completed the splitting and merging of data packets today. It wasn't that much trouble. Once we got that going, we were able to solve the rest of our consistency problems across the different devices.

After showing our application to Kevin, we are now tasked to include multi-point interactions for the application. Once this is done, version 0.1 of our application would be ready! :D

*edit*
Now we're facing problems with multi-point interactions. The buffer size for data is either too small, or too big. If we set the buffer size to 512, and multi-point interaction occurs, we get an error. If we set it to 1024, and single-point interaction occurs, our application would not run properly as we're sending only small amounts of data.

Friday, October 29, 2010

29 October 2010 - More Bonjour!

As the title says, more Bonjour. We decided to go for a different approach today - to change our server and make use of delegation.

We managed to get the connections set up properly and were able to send and receive data on the client and server side. But we have one problem. Since we're doing asynchronous data transfer, when the different touch events are fired rapidly, the data being sent gets merged even before it reaches the receiving end. When this happens, unarchiving the data causes the program to have errors, and we would not be able to display the user interactions properly.

We tried to solve the problem by applying threading, which I thought would work. Still, it failed. We didn't try doing synchronous data transfer as it would be slow if many users were to connect at the same time. We even tried to skip the unarchiving for data which exceeds a certain number of bytes, which obviously isn't the right way -.- this caused the interface to become jerky as some of the user interactions would be skipped.

Therefore, after doing some research, we realized that manual splitting and merging of the data's bytes has to be done before unarchiving on the receiving end starts. All along we were working based on the misconception that the way we send or receive is incorrect. However, that's not the case. It's actually due to the network that we're sending our data through. If data is being sent at a very fast rate, the data packets tend to be merged on the network. It has got nothing to do with the way we send or receive; it's much more about what should happen on the receiving end before unarchiving the data.

Hence, we'll try out the splitting and merging of data packets on Monday to see what happens.
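
One common way to do this splitting and merging is framing: prefix each archived packet with its length, and on the receiving end keep appending to a buffer and only unarchive once a full packet is available. A rough sketch of that idea, not necessarily what we will end up doing (handleIncomingTouches: is a made-up handler, and htonl/ntohl come from <arpa/inet.h>):

// Sending: prefix the archived data with its length so the receiver can find
// the packet boundaries even if the network merges several sends together.
- (void)sendPacket:(NSData *)packet over:(NSOutputStream *)stream
{
    uint32_t length = htonl((uint32_t)[packet length]);
    NSMutableData *framed = [NSMutableData dataWithBytes:&length length:sizeof(length)];
    [framed appendData:packet];
    [stream write:[framed bytes] maxLength:[framed length]];
}

// Receiving: append whatever arrives to a leftover buffer, then peel off
// complete packets one at a time and unarchive only those.
- (void)consumeBuffer:(NSMutableData *)buffer
{
    while ([buffer length] >= sizeof(uint32_t)) {
        uint32_t length;
        [buffer getBytes:&length length:sizeof(length)];
        length = ntohl(length);
        if ([buffer length] < sizeof(uint32_t) + length)
            break;   // the rest of this packet has not arrived yet

        NSData *packet = [buffer subdataWithRange:NSMakeRange(sizeof(uint32_t), length)];
        [buffer replaceBytesInRange:NSMakeRange(0, sizeof(uint32_t) + length)
                          withBytes:NULL length:0];

        id touches = [NSKeyedUnarchiver unarchiveObjectWithData:packet];
        [self handleIncomingTouches:touches];
    }
}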

Thursday, October 28, 2010

28 October 2010 - Bonjour again

Continued working with Bonjour again. Cracked our heads for the whole day! At first we missed out the code for the listening socket, which meant we were not able to listen for incoming connections. Once we added that in, things worked. We managed to establish the connection today - the client joining the server.

However, there seems to be something wrong as the client is able to send the touches to the server, but not the other way round. We suspect that it has got to do with something related to NSOutputStream. Since it's already quite late, we decided to read up more on Bonjour after work.

Wednesday, October 27, 2010

27 October 2010 - Bonjour

Worked on the Bonjour protocol today. Had to get this right as it will be the groundwork for the project that we'll be working on soon. This time, it is not the same as the assignments that we did before. Gotta research and try to get things working on our own. Only when the three of us (Kevin, Zac and I) are unable to find a solution can it be considered that we failed to do what we wanted; that is when we think of alternatives.

Kevin's right, we got to learn to express ourselves more. If we are unable to express ourselves well, we would tend to get the wrong message across, resulting in poor communications. The project could be directed onto another path because we failed to express ourselves well enough. Learnt this important lesson from Kevin today. Thanks Kevin :)

Managed to publish the NSNetService properly today. Gonna work on multiple interactions tomorrow. Hope that we can come up with something good :)

Tuesday, October 26, 2010

26 October 2010 - More on Touch

Continued working with Touch on iOS today. Managed to complete one of the multitouch assignments, which requires interaction with irregular shapes, before lunch. Zac and I found some classes on the net that cater for buttons with images that have transparent portions. Still don't quite understand how that class works though. The only thing I know is that the class is playing with the color pixels of the button's image. Gotta spend some more time understanding the code.

Gonna go work on the other touch application in which a shape, say a square, would be able to follow the user's touch on the iDevice.

Just finished the second application, and it works! :D Time to go back and try to understand the classes that I used in the first application.

Watched the lecture on Bonjour protocol while waiting for Kevin to check on us. It's really cool! There are all sorts of things we can do with it. Really up to our imagination.

*edit* added some animations to the second application after Kevin came and looked at our work.

Monday, October 25, 2010

25 October 2010 - NSOperation & Touch

Tasks for the day:
- Watch lecture 14 on Multi-Touch
- Rework assignment 4 part 3 with the use of non-blocking technique
- Work on Multi-Touch Application

Watched the lecture on multi-touch this morning and tried out the demo applications. When we came back for lunch, Kevin took a look at our assignment 4 part 3. For that assignment, our interface was blocked whenever we retrieved data from Flickr. Hence, Kevin gave us a lecture on NSOperations (aka threading).

Kevin also told us about the hierarchy of data retrieval from the fastest to the slowest:
1. Memory
2. Disk
3. Local Network
4. Internet

We learnt that whenever we perform costly operations, it is always good to make use of threading. This prevents blocking of code, and users then have a better experience with the application. Otherwise, if the application is blocked due to long-running code, it becomes unresponsive, and the next thing on the user's mind would be to exit the application.
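
A minimal sketch of the pattern, not our actual assignment code (fetchPhotos and showPhotos: are hypothetical method names):

// Run the costly Flickr fetch off the main thread via an NSOperationQueue.
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
NSInvocationOperation *op =
    [[NSInvocationOperation alloc] initWithTarget:self
                                         selector:@selector(fetchPhotos)
                                           object:nil];
[queue addOperation:op];
[op release];

// Inside fetchPhotos, once the data is back, push the UI update onto the main thread:
// [self performSelectorOnMainThread:@selector(showPhotos:)
//                        withObject:photos
//                     waitUntilDone:NO];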

We managed to rework the application to remove blocking, and the whole process has become much more ideal. Here are some references used during the reworking of the application:


Now, we're given 2 multi-touch assignments to work on. We would be working on the second one first, which seems to be more difficult.

Got this useful link on the net to deal with irregular-shaped buttons. Gonna try it out tomorrow. Have to go back home and create the images.

Friday, October 22, 2010

22 October 2010

It's Friday again~

Kevin briefed us on what to do in the morning. We'd have to go through the Xcode preferences to see what each setting does, as well as review the applications that we have programmed previously. After reviewing, we'll be cleaning up the code so that it can be used for future reference. I'm sure at some point when we're working on our upcoming projects, we're bound to forget some things. It's easier to recall what we have done ourselves anyway.

Zac found a whole list of Xcode settings and shortcut keys. Very useful! Check it out in the useful links section "List of Xcode Settings & Shortcut keys". OH, and after exploring Xcode's preferences, finally found the setting to have all windows in one. That makes the viewing much cleaner! Do this by going to menu bar / Xcode / Preferences / General tab and changing Layout to "All-In-One".

Went for lunch with Bernard and Zac today. Ate at the foodcourt on the first floor and went to walk around in the market place (basement). Bernard took us to the Sky Garden at the 23rd floor. There's such a nice view there, but the haze simply ruined it. -.-

Watched the lecture done by a guest lecturer who talks about the basics - getting help and debugging.

So far, life @ IHPC is real good. I've been learning new stuff and making progress every day. Although it can be tiring at times, we have to persevere. The stuff that we learn will definitely be put to use when we work on our actual project. Looking forward to that day.

Starting on Touch next week, one of the major components that we'll be interacting with in our project. Gotta have a goooooooood understanding of this part.

Thursday, October 21, 2010

21 October 2010

Continued working on assignment 4 part 3 (aka assignment 6) today. Managed to have more progress already. Learnt how to pull data from Flickr and populate the data in our application. I was also able to plot the pin locations on the MapView with the use of Flickr's photo geo-locations.

Zac met with a "crazy" problem while trying to add in a framework. 58 errors, crazy - Yes? No? After spending around 1 hour trying to debug, Kevin came to the rescue. Haha! Kevin the master programmer?

We tried to set up FaceTime on the Mac. But it doesn't seem to be calling the phone and the Mac at the same time. We're gonna go through the keynote tomorrow to find out more.

Oh yeah, gonna have to spend some time going through Xcode settings. Really important!

Wednesday, October 20, 2010

20 October 2010

Tasks for the day:
- Go through lectures 10, 11 and 12
- Complete assignment 4 part 3 (aka assignment 6)

Today seems like a very long day... 3 lectures in a row! I got lost when watching the lectures. However, once I started working on the assignment, I was able to recap on what was being taught. I went through parts of the lecture again just to follow the demos. The demos done in these 3 lectures were very useful. It helped a lot when working on the assignment.

However, Zac and I are now stuck. The assignment we're working on has got to do with Flickr. We tried out the given methods to retrieve the photos from Flickr but nothing worked. Then finally I realized that we are missing the class "FlickrAPIKey.h". I encountered this problem earlier when starting on this assignment, but somehow the problem just went away. Then when I tried shifting the classes around to organize them, the problem came back.

We put in that missing class, but still needed the API key in order to process the retrieval of information. We tried to register for an API key, but the server is not processing our request for it. Guess we'll just have to wait till tomorrow.

Time now: 5.40 pm

Used this link today to learn about creating delegate classes:

Tuesday, October 19, 2010

19 October 2010 - More Core Data

Been working with Core Data for the whole day! Finally completed Assignment 4 Part 2. The way of adding data into Core Data is different from what we did in our previous applications via Visual Studio. The toughest part of this assignment was to know how to write and retrieve proper data in/out of Core Data. Once we got the concept right, the rest of the assignment wasn't much of a problem at all.

We thought that the way we retrieved and stored data with Core Data yesterday was the right way. However, some issues surfaced after some analysis. What we had done was to add the separate entities, namely Person and Photo. However, all that we need to do is add the Person entity, as a Person contains a set of Photos. We just had to properly assign the NSSet of Photo instances to the Person, and add the Person into Core Data. The relationship is established automatically by Core Data.
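
Roughly, it boils down to something like this. Person and Photo are our entities, but the attribute names, the generated addPhotosObject: accessor, and the context variable here are illustrative, since they depend on the actual model.

// Insert the Person and its Photos, wire up only the relationship, and save;
// Core Data maintains the inverse side of the relationship for us.
Person *person = [NSEntityDescription insertNewObjectForEntityForName:@"Person"
                                               inManagedObjectContext:context];
person.name = @"someone";

Photo *photo = [NSEntityDescription insertNewObjectForEntityForName:@"Photo"
                                              inManagedObjectContext:context];
photo.title = @"a photo";

[person addPhotosObject:photo];   // generated accessor for the to-many "photos" relationship

NSError *error = nil;
if (![context save:&error]) {
    NSLog(@"Core Data save failed: %@", error);
}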

Through much researching and looking at sample codes, as well as Kevin's help, we got to understand how Core Data is being implemented. Zac and I also helped each other in the implementation and debugging. We went ahead to derive an algorithm that saves and retrieves data properly using Core Data. This time round, we're very sure that everything is right.

All along we've been working with delegates, just that I didn't know what that was called - until Kevin explained what delegation was all about. It's one of those design patterns out there, where the methods implemented by the delegate class are called by the OS itself. All that we need to do is implement the required methods, and/or the optional methods. It's really kinda amazing how things work.

I've been learning many new stuff since I came here as an intern on the first day. Everyday is about learning new stuff, not just technically, but about things going on in life as well.

Went for lunch with Kevin, Calvin, Bernard and Zac today. Talked about getting cars, houses and even a girlfriend. Interesting to know how they look at things in life. Had a tea break too, at level 14. It's like a common area for all the various different departments. Played pool with Kevin, Calvin, Gim Han and Zac. Man, they played like professionals!

Would be proceeding on to watch more lectures and complete more assignments tomorrow.