Wednesday, December 21, 2011

Final (at this point)

Demo Instructions

Sorry the video's not the best; all I had was CamStudio. Below are all the instructions needed to demo on my lab computer.
Actual file hosted on website (815MB): http://kaitlinppollock.com/final.mov

To Demo shader as seen in video: 

Open Maya 
In the Python command line, type "import shader"
Add the appropriate files (C:\Kaitlin\SeniorProject\MaleModel\Textures)
Choose "Create Shader" -> shader is created with the name "skinOver"
Apply to object -> Right Click -> Apply Existing Material -> skinOver
If you added displacement, this can be adjusted (no error checking at this point, so if no displacement file was selected please don't click)
To apply exhaustion progression -> Exhaust
Images for Color, Epidermal, Subdermal, and Backscatter will be shown if selected
3 clicks, choose eyes and mouth -> hit Enter -> there will be a rather long wait as the files are generated
Repeat for the remaining files that pop up
File ready (for demo purposes only every 10th frame is created, so please view 0, 10, 20, etc. -> 490)


To Demo sweating:

Open Maya 
Select faces that you want to define the surface emitter 
In python script line type "import sweat" 
Wait 
Hit Play 


Code: 
C:\test 
C:\Users\Kaitlin\Documents\maya\2012-x64\scripts

Files:
C:\Kaitlin\SeniorProject

Sweating

Frustrated with the state of the exhaustion progression, I've moved over to the sweating system.
Moving along quite nicely :) let's just hope it stays that way. Currently the system is set up so that the user selects the faces of the face (or whatever area they want sweat to emit from) and then runs the script, which duplicates the object, deletes the faces that were unselected, makes the new object a surface emitter, duplicates and scales up the emitter to make a collider, and hides both the emitter and the collider.
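Those steps can be sketched with maya.cmds; this is a minimal outline, not the project's actual script (the function name, rate, and scale factor are my own assumptions, and it only runs inside Maya):

```python
# Runs inside Maya only (maya.cmds is not a standalone module).
import maya.cmds as cmds

def make_sweat_emitter(scale_up=1.05):
    """Turn the currently selected faces into a hidden surface emitter
    plus a slightly larger hidden collider. Sketch only."""
    sel = cmds.ls(selection=True, flatten=True)       # e.g. ['head.f[12]', ...]
    obj = sel[0].split('.')[0]

    # Duplicate the whole object, then delete the faces that were NOT selected.
    emitter = cmds.duplicate(obj, name=obj + '_sweatEmitter')[0]
    keep = set(f.split('.')[1] for f in sel if '.' in f)
    all_faces = cmds.ls(emitter + '.f[*]', flatten=True)
    unselected = [f for f in all_faces if f.split('.')[1] not in keep]
    if unselected:
        cmds.delete(unselected)

    # Make the trimmed duplicate a surface emitter.
    cmds.select(emitter)
    cmds.emitter(type='surface', rate=100)

    # Duplicate and scale up the emitter to act as a collider, then hide both.
    collider = cmds.duplicate(emitter, name=obj + '_sweatCollider')[0]
    cmds.scale(scale_up, scale_up, scale_up, collider)
    for node in (emitter, collider):
        cmds.hide(node)
    return emitter, collider
```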

[EDIT 6:50pm]
Full system in place minus geometry instancing, progression, and tweaking of variables.

[EDIT 8:00pm]

References: 

Today's show: 3rd Rock from the Sun (Season2.24-26, Season3.1-10)

Tuesday, December 20, 2011

Exhaustion Progression

[1:30pm]
There are several ways I could have gone about this.
If I had wanted to be able to see the progression from Maya's preview window, I would have created all the progression images and the time slider would determine which image texture is shown. 
However, to avoid creating all those images, the progression can only be seen when rendered - the image is created just before being rendered. 
Here's a quick demo, increasing the radius by 100 each jump.
[EDIT 5:25pm]
My goodness, it's always the littlest things that cause the biggest problems.
I finally got the refresh of the image file working from my tester Python script (since it's changing the file at each time step). However, transferring this over to my amalgamated script - still in Python - it suddenly does not recognize the code. I tried a hack of reassigning the image file to the same image, hoping that would refresh it; it did not. Finally got something working where the body script calls my tester script (only works sometimes though... only worked that once...).

[EDIT 7:15pm]
Interesting discovery, the eval command runs fine on a file created manually, but not on a file created from the script, so where's the difference?

[EDIT 7:35pm]
So, it's because I was not connecting the place2dTexture node when creating the file node. That worked fine for display, but apparently it's needed for refreshing the file. So that's working; new TypeError problem now - one of those fun ones where Python doesn't know that your string is a string even with str(). Resolved.
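For reference, a minimal sketch of creating a file node with its place2dTexture wired up (only the two commonly needed connections are shown here; Maya's Hypershade connects many more attributes, and the function name is mine):

```python
# Runs inside Maya only.
import maya.cmds as cmds

def make_file_node(image_path):
    """Create a file texture node connected to a place2dTexture node."""
    file_node = cmds.shadingNode('file', asTexture=True)
    place = cmds.shadingNode('place2dTexture', asUtility=True)
    # Without these connections the texture still displays, but
    # refreshing the image from script can silently fail.
    cmds.connectAttr(place + '.outUV', file_node + '.uvCoord')
    cmds.connectAttr(place + '.outUvFilterSize', file_node + '.uvFilterSize')
    cmds.setAttr(file_node + '.fileTextureName', image_path, type='string')
    return file_node
```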

[EDIT 8:25pm]
Issues again. The refresh works on the script-generated file, but only if I've refreshed a manually made file first. ???? 20 minutes later: that's literally the only difference between when the refresh works and when it doesn't.

[EDIT 12:00am]
I have no idea what the issue is. No errors, it calls the functions fine, proper radii and locations and everything, edits the image, but the image doesn't actually change. I hate it. 


[EDIT 21Dec 10:00pm]
The refreshing issue is frustrating me to no end and I know I could spend days trying to debug it. But as this is due in a mere 2 hours, I'm going back to the memory intensive method based on something I've used before and so am fairly confident that it will produce sufficient results. 



Today's Show: 3rd Rock from the Sun (Season 2.1-23)

Monday, December 19, 2011

User Selection

So I finally got user selection working, of a sort.
Brings up a copy of the image at actual size and reads in three selection points. 
However, it does not apply these 3 selection points, but instead only puts the base red on the face. 
Wonderfully enough, it does show the image when called from within Maya, although it does not automatically exit from the image after hitting Enter (as it does when you call it from the command line). As it still lacks any visual feedback, it's not very intuitive.
It also remains to be seen if it's reading in the selection points properly as it does not print out the positions.

[EDIT]
So, an issue. The cv window does not show the entire image, only what fits on the screen. However, the bottom corner still reads as the max height and width of the image. And as the window is supposed to autosize to the image, I have no definite ratio to convert the coordinates...

[EDIT 10:45pm]
Got resizing working! Now I have a definite ratio to work with.
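With a fixed display size, mapping a click back to image coordinates is just a scale; a quick sketch (variable names are mine, not from the project code):

```python
def to_image_coords(click_x, click_y, disp_w, disp_h, img_w, img_h):
    """Map a click in the resized display window back to pixel
    coordinates in the original image."""
    sx = img_w / float(disp_w)   # horizontal display->image ratio
    sy = img_h / float(disp_h)   # vertical display->image ratio
    return int(round(click_x * sx)), int(round(click_y * sy))

# A click at the center of an 800x600 window over a 1600x1200 image
# maps to the center of the image.
print(to_image_coords(400, 300, 800, 600, 1600, 1200))  # (800, 600)
```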

[EDIT 10:50pm]
Generated by user selection! (not from within Maya though; let's check that from Maya too!)
Note: delighted that the cv display works when called from within Maya.
However, writing to a text file from the C++ code does not work when called from Maya
(not necessary for this project so that's okay, but just a note)

References:
http://nashruddin.com/eyetracking-track-user-eye.html
http://linuxconfig.org/resize-an-image-with-opencv-cvresize-function

Today's show:
[19 Dec 2011] 3rd Rock from the Sun (Season 1)

Monday, December 12, 2011

Beta Review

Things slowed a little after Beta review as I rushed to catch up on my other school work, but now it's back on track and a race to get everything done by the deadline.
Some finagling with Visual Studio has allowed me to wrap my C++ code in a Python module that will run in the Windows environment. I now have an option in my Maya script that allows you to "exhaust" the texture. It just switches out the textures at this point (no progression), but small steps first.

Goals:
-progression of exhaustion in Maya
-user selection
   .determine if running through python, c++, openCV
   .determine radius based on selection
   note: I have a feeling this will take longer than I want it to, and I may have to decide between continuing this or    
    moving on to other pieces of the project
-Sweating
   .gradient face selection
   .duplication of faces, scaling
   .creation of surface emitter
   .object-particle instancing
   .parenting

[EDIT 13 Dec 2011]
Notes from Beta (comments from Norm and Joe):
-specular based on ambient occlusion (more sweat builds up in creases of the body)
-face drained of color

Sunday, November 27, 2011

Demo Prep

Still don't have the SWIG code running from cmd, or any Python code called in a Windows environment, so for now I'll just have to call it from Cygwin.
-options for creation of many files that have the flush growing (basefile input), or a single image file (basefile input, radius)
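The flush generation itself (done in C++ in the project) boils down to blending a red tint into each pixel within a growing radius; here's a toy Python version of that idea, with the function name, flush color, and linear falloff all my own assumptions:

```python
import math

def flush_pixel(base, center, x, y, radius, flush=(200, 40, 40)):
    """Blend a flush color into one RGB pixel, fading linearly
    from full strength at the center to zero at `radius`."""
    d = math.hypot(x - center[0], y - center[1])
    if d >= radius:
        return base                       # outside the flush: untouched
    t = 1.0 - d / radius                  # 1 at center, 0 at the edge
    return tuple(int(round(b * (1 - t) + f * t))
                 for b, f in zip(base, flush))

# At the center the pixel is fully flushed; halfway out it's a 50/50 blend.
print(flush_pixel((100, 100, 100), (0, 0), 0, 0, 10))  # (200, 40, 40)
print(flush_pixel((100, 100, 100), (0, 0), 5, 0, 10))  # (150, 70, 70)
```

Growing the flush over time is then just calling this with an increasing radius per frame (e.g. +100 each jump, as in the progression demo).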

Within Maya, open python command line
import shader (will only run once per session, after that Maya stores the code)
to run again reload(shader)
-choose appropriate files (default used if none chosen)
-adjust displacement if desired

Saturday, November 26, 2011

Fun with SWIG

Spent a long while trying to figure out user input from a mouse click. Looked into C++, openCV, and python options, none were working out very well so I've decided to leave that for now. I'll get the system working with text or argument input first.

I've been working and testing the code by just compiling the C++ code; time to make sure I can still SWIG it into Python code. It still worked great just running from the mintty window, but then I tried creating a Python file that would call the wrapped code. This proved more difficult than anticipated. With my wrapper commands I was somehow not creating the file necessary for the wrapped code to be treated as an importable module. Lots of googling and reading later, I discovered distutils, a Python setup mechanism that takes care of all of the necessary flags, files, etc. It builds the file necessary to make an importable module and cuts down on my compilation code as well:
swig -c++ -python example.i
python setup.py build_ext --inplace
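The setup.py those two commands rely on is small; a minimal sketch for a SWIG module named example (matching the example.i above - the actual source file names in the project may differ):

```python
# setup.py - builds the _example extension that SWIG's generated
# example.py wrapper imports under the hood.
from distutils.core import setup, Extension

setup(name='example',
      ext_modules=[Extension('_example',
                             # example_wrap.cxx is produced by
                             # `swig -c++ -python example.i`
                             sources=['example_wrap.cxx', 'example.cpp'])],
      py_modules=['example'])
```

The "No module named _example" error below is exactly what you get when this extension (_example.pyd on Windows, _example.so on Unix) isn't built for, or findable in, the environment you're importing from.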

sorry no pictures this post :(

[EDIT: 27 Nov 2011]
Well, I got a system working in Maya that generates a wonderful SSS shader based on user-selected images.
Went to build in the flushing generation, and even though I got the import working yesterday, apparently it only works in a Unix environment. Trying to run through a Windows environment, I'm back with the same "No module named _example" error...

References:
http://www.swig.org/Doc1.3/Python.html#Python_nn9 (31.2.2 - 31.2.6)


Python GUI References:
http://download.autodesk.com/us/maya/2010help/CommandsPython/textField.html
http://www.rtrowbridge.com/blog/2010/02/maya-python-ui-example/
http://download.autodesk.com/us/maya/2011help/CommandsPython/fileBrowserDialog.html
Good Code to Have: http://mail.python.org/pipermail/python-list/2010-August/1252307.html

Mouse Click Input References:
http://dasl.mem.drexel.edu/~noahKuntz/openCVTut3.html
http://www710.univ-lyon1.fr/~bouakaz/OpenCV-0.9.5/docs/ref/OpenCVRef_Highgui.htm#decl_cvSetMouseCallback
http://www.ida.liu.se/~ETE257/timetable/LecturePythonPygame2.html
GetCursorPos()
http://www.daniweb.com/software-development/cpp/threads/123456
http://www.cplusplus.com/forum/windows/21620/