Saturday, January 14, 2012

Robots!

I've been wanting to branch out my skills recently and ended up catching the robotics bug. It combines mechanical engineering, electrical engineering, and software engineering all into one awesome package. I finally bit the bullet, ordered my first Arduino micro-controller from http://www.trossenrobotics.com/ (located up in Chicago), and started building my first robot. I'm going to start keeping a record of what I'm doing; this first post is just a mass update to bring things up to where I am today.

Chassis Design

I mocked up a simple chassis design using Maya; it's nothing special, but it gave me a general idea about proportions and the basic mechanics. Here's the basic sketch.

Like I said, nothing special, but it helped me get a basic idea of how this thing is going to work. As of right now, it's a cardboard and wooden dowel chassis, about 18" long. It's driven by one powered wheel in the center; the two wheels in the back are unpowered, as are the two in the front. I'm rigging up a pretty basic steering column for turning, an idea I got from an RC car I took apart. The two wheels in the back are really only there for stability.

The red and black mess up front is the pan and tilt camera kit with a salvaged Droid attached. More on that later.

The light blue pieces are the two servos I'll be using.

The green rectangles are the Arduino micro-controller and the breadboards for wiring.

The purple cubes are the two 3300 mAh 6 V rechargeable NiMH batteries used for powering the servos, plus one 9 V battery for powering the Arduino.

Electronics
For my micro-controller I'm using the Arduino Uno R3 with the XBee wireless shield for remote communication. For servos I'm using the Hitec HS-322HD Deluxe with Karbonite gears. The pan and tilt requires two unmodified servos; the drive train requires one servo modified for continuous rotation through a full 360 degrees. Modifying my first servo took some time and it's nowhere near perfect. The potentiometer is glued in at roughly 2 degrees off from center; thankfully, that's easy to correct in code on the micro-controller. As for sensors, I have some photosensors, a gyro, and a sound sensor. I bought the Grove shield, which lets me plug in the gyro and sound sensor fairly easily. I plan on purchasing an IR sensor for collision detection.
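To give an idea of what that correction looks like, here's a minimal sketch of a trim offset, shown on the Python side where my control code lives (the constant and helper names are hypothetical; the same offset could just as easily live in the Arduino sketch):

# Hypothetical trim for a servo whose potentiometer sits
# roughly 2 degrees off-center
SERVO_TRIM = 2

def apply_trim(angle):
    # Clamp the trimmed angle to the valid 0-180 servo range
    return max(0, min(180, angle + SERVO_TRIM))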

One of the main things I wanted my robot to do was communicate back to my host computer with minimal lag; I also wanted to be able to remotely control it and get live video feedback. The first two parts are fairly simple, but the third gets a little more tricky. After searching for quite some time for a decent robotics camera, the best I came up with was a low-res camera only capable of taking still shots, not exactly what I wanted. It seems the major limitation here is the processing speed of the micro-controller. Because of this, I had to get creative. My first idea was to use a wireless webcam. As I started googling, an Android app popped up that allowed me to turn my Android phone into a wireless webcam. The app is called IP Webcam https://market.android.com/details?id=com.pas.webcam&hl=en. I turned on wifi on my phone and was able to connect instantly to my Android-powered webcam. The app starts up a webserver for you, and if you open up the correct ports on your router, anyone can view it. It's also capable of being password protected. The developer's website also has a link to a PC-side application that lets you use your new webcam stream as a simulated hardlinked webcam. I'll explain that more later.

Because I plan on expanding this in the future, I'm using breadboards for the majority of the development. This first model is an alpha and I don't plan on getting too rough with it, so the breadboards should be fine. I want to keep this as modular as possible for future expansion.

I started doing some mechanics and electronics tests. Here are my results:

Drive Servo and Steering Servo Test, no loads
I accidentally turned on a weird filter, so the video colors are off, but you get the idea.

Drive Servo Torque Test
Here I'm testing my general controls as well as the torque and movement of my drive servo. It appears to work well until I get it stuck on my headset. I also wanted to test the wireless responsiveness and ensure my settings were all correct. You can see that the Arduino is not physically connected to the host PC.

Pan and Tilt Camera Test
Here I'm testing my controls and mechanics for the pan and tilt camera. I wanted to ensure that the servos were set up correctly and powerful enough to move the phone. As you can see, no problems at all.


Programming
My primary development will take place in the Arduino language for the micro-controller and Python for logic processing and control. Arduino is based on C/C++ and is fairly straightforward.

Arduino has a large following in the robotics community; there are a multitude of code examples and tutorials, and the documentation is pretty robust. To communicate with the micro-controller I'm using SoftwareSerial, which sets up a pretty simple serial server that I can connect to from Python. For right now I won't post any code examples since I'm still developing my libraries and the code is pretty much just for testing purposes; as I get farther along I'll post code updates.

As far as Python goes I'm using a few non-standard modules to achieve what I want.
First and foremost, to make the serial connection I'm using a module called pySerial, found at http://pyserial.sourceforge.net/. This module is the basis for the Arduino library I'm building to handle common functions.
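As a taste of how simple pySerial makes this, here's a minimal sketch of the kind of call my library wraps (the port name, baud rate, and one-character command are placeholders, not my actual protocol):

import serial  # pySerial

# The XBee shield shows up as a regular COM port on the host side;
# the port name and baud rate here are placeholders
arduino = serial.Serial('COM4', 9600, timeout=1)

arduino.write('f')             # send a hypothetical one-byte command
response = arduino.readline()  # read back whatever the Arduino echoes
print response

arduino.close()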

For video processing I'm using the Python bindings for OpenCV (Open Source Computer Vision) http://opencv.willowgarage.com/wiki/. OpenCV is a pretty powerful library specifically designed for computer vision. It has built-in methods for shape and edge detection, as well as Haar cascade object detection, which lets me quickly process images and find shapes such as faces, eyes, or other human body parts.
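As a rough sketch of what Haar detection looks like with the old-style cv bindings (the file paths and tuning numbers are just illustrative):

import cv  # legacy OpenCV Python bindings

# Load a frame and a face cascade; both file paths are illustrative
image = cv.LoadImage('frame.jpg')
cascade = cv.Load('haarcascade_frontalface_alt.xml')
storage = cv.CreateMemStorage(0)

# Scale factor, min neighbors, flags, and min size are all tunable
faces = cv.HaarDetectObjects(image, cascade, storage, 1.2, 2,
                             cv.CV_HAAR_DO_CANNY_PRUNING, (40, 40))

# Draw a box around anything that looks like a face
for (x, y, w, h), neighbors in faces:
    cv.Rectangle(image, (x, y), (x + w, y + h), cv.RGB(255, 0, 0), 2)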

For webcam communication I am using the VideoCapture module http://videocapture.sourceforge.net/.

So, a small caveat to using VideoCapture and OpenCV together. OpenCV has built-in methods for retrieving webcam feeds, but unfortunately it only appears to work with DirectShow-enabled cameras. Enabling other capture methods requires a full rebuild and recompile of OpenCV, which seemed like overkill. That's when I ran into the VideoCapture module, which works on most webcam feeds out of the box. Now, VideoCapture uses PIL to return images. When you request an image from the camera on Windows, PIL automatically flips the blue and red channels for you; unfortunately, OpenCV does the same thing. So when I fed the processed image back to the OpenCV frame, my colors were reversed. Scouring the internet, I finally found a solution. I needed to convert the PIL image into a string anyway, so this method worked out for me:

img.rotate(180).tostring()[::-1]

The key point here is [::-1]. Reversing the raw byte string flips both the pixel order and the channel order within each pixel; the rotate(180) pre-compensates for the pixel-order flip, so the net effect is just swapping the channels back to their original state, allowing OpenCV to correctly flip them when the image is output to my window.
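Put together, the capture-to-OpenCV handoff looks roughly like this (a sketch against the old cv API; the row stride assumes a 3-channel image):

import cv
from VideoCapture import Device

cam = Device()            # first available webcam
pil_img = cam.getImage()  # VideoCapture hands back a PIL image

# Reverse the byte string to swap the channels back; rotate(180)
# cancels the pixel-order flip the reversal would otherwise cause
data = pil_img.rotate(180).tostring()[::-1]

# Wrap the raw bytes in an OpenCV image header for processing
frame = cv.CreateImageHeader(pil_img.size, cv.IPL_DEPTH_8U, 3)
cv.SetData(frame, data, pil_img.size[0] * 3)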


Finally, if you noticed in my videos, I'm using an Xbox 360 controller to control my servos. To receive input from the controller I'm using pygame http://pygame.org/news.html. Once I receive the input from the joysticks, I simply round the values and fit them into the 0 to 180 range.
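A minimal sketch of that mapping (the axis index varies by controller and driver, so treat it as a placeholder):

import pygame

pygame.init()
pygame.joystick.init()

# Grab the first connected controller
stick = pygame.joystick.Joystick(0)
stick.init()

while True:
    pygame.event.pump()       # refresh joystick state
    axis = stick.get_axis(0)  # -1.0 .. 1.0; axis 0 is a placeholder
    # Map the stick range onto the 0-180 servo range
    angle = int(round((axis + 1.0) * 90))
    # ... send angle to the Arduino over the serial link ...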

Next Steps
My next steps revolve around finishing the cardboard chassis and getting this thing moving around. Based on some of my strength tests, I need to reinforce the body with multiple layers of cardboard. Instead of one solid piece, I may end up breaking the body into segments in order to strengthen the frame.

Once that is completed I need to finish connecting all of my electronics and finish the Arduino program to support them. I also plan on building an H-bridge for a DC motor so I can beef up the drive on the robot, letting it support a heavier, sturdier chassis in the future. Cardboard is great, but it isn't exactly the strongest of materials.

Following that I need to combine my applications for control and input in python.


Conclusion
Well, there it is, my huge post on my beginning adventures into robotics. Look for more in the future as I continue to build this.

Thursday, September 22, 2011

Python Decorators

So it came to my attention yesterday that the use of decorators in Python is not extremely well known. A couple years ago I discovered them and have been using them whenever I can. Decorators are one of those things that you don't realize you need until you learn about them.
So what exactly is a decorator?

From the python wiki:
A decorator is the name used for a software design pattern. Decorators dynamically alter the functionality of a function, method, or class without having to directly use subclasses or change the source code of the function being decorated.

To put it simply though, a decorator allows you to execute code before and after a function is called, effectively wrapping the function in other code, dynamically.

For a more practical example, let's use a simple, common task that any Python developer runs into: timing a function. Without decorators there are multiple ways you can do this, and most involve duplicating code. If you want to time a lot of functions, you could potentially be duplicating many lines of code and altering your original source. Adding and removing the timing functionality becomes a very tedious task at the end of the day.

This is what timing code most commonly looks like from developers who don't know about decorators:

import time

def func_a(*args):
    start_time = time.time()
    ...
    end_time = time.time()
    print end_time - start_time


Now, while the duplication of code isn't too horrible, imagine you have hundreds of functions and want to time all of them; you'd have to add that code to every function you want timed. Quite a daunting task in a large script.
How do I use decorators?

Setup for decorators is actually really simple. Setup consists of two parts, the decorator function and adding the decorator to the function.
Decorator Function

Keeping with the timing example above, the function for a decorator is quite simple. The decorator is a function that takes a function as an argument. Inside of that function is another function that does the wrapping. Using the timing example, this is what a decorator function would look like:

import time

def print_timing(func):
    def wrapper(*arg, **kwds):
        # Start the time
        t1 = time.time()

        # Run the function with the same arguments passed to the original function
        res = func(*arg, **kwds)

        # Stop the time
        t2 = time.time()

        # Tell me how long it took
        print '%s took %0.3f s' % (func.func_name, (t2 - t1))

        return res
    return wrapper


Now that we have our decorator function, we simply need to decorate the functions we want to wrap. The Python syntax for this is @. In our example it would be @print_timing. To decorate the function we place @print_timing on the line directly above the def of the function we want to time. For example:

@print_timing
def func_a(*args):
    ...

So there you have it; you can now decorate all the functions you want. No matter where the function is called from, Python will route the call through print_timing's wrapper.
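It's worth knowing that the @ syntax is just shorthand for reassigning the function by hand:

def func_a(*args):
    ...

# Exactly equivalent to decorating func_a with @print_timing
func_a = print_timing(func_a)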

Decorators actually came up while resolving an issue with OpenGL affecting the redraw of WX elements that were being dynamically updated. A decorator function was used to solve it; when I mentioned the solution to a colleague of mine, he had never heard of decorators before, which prompted this whole post.

Enums in Python

As it stands right now, there are no built-in enums in Python like there are in many other programming and scripting languages. Enums come in handy when you need to set flags or quickly compare objects. A quick search turned up a class and a function that pretty much emulate enums inside of Python. I've already started using it in the TCP/IP project I'm working on and it's working great. Wish I had found this sooner. Pretty soon I'll post a quick little walk-through of sockets and TCP/IP in Python that uses the enum example below. In the past couple of weeks I've come to see the huge potential of using sockets in tools, but I'll cover all that in a different post.

def M_add_class_attribs(attribs):
    def foo(name, bases, dict_):
        for v, k in attribs:
            dict_[k] = v
        return type(name, bases, dict_)
    return foo

def enum(names):
    class Foo(object):
        __metaclass__ = M_add_class_attribs(enumerate(names))
        def __setattr__(self, name, value):  # this makes it read-only
            raise NotImplementedError
    return Foo()

# Message Type Enum
message_types = enum(("MESSAGE", "STATUS", "NONE"))

# Check the message type against a member of the enum
if (message_type == message_types.MESSAGE):
    pass

Getting the angle of two edges

Today I found another issue with my cap piece creation: verts that were placed on an edge and are basically isolated. Some of our game assets require this, so I couldn't just remove isolated vertices; instead I needed a way to ignore these verts when looking for the closest vert to a position. My solution was to determine the angle between the two edges that meet at a vert; if the angle comes out to 180° then I know that vert is a t-junction vert.

I figured it would be helpful to explain how to determine the angle between two edges. I'll go over the math for it, then show a code example in Max.

The first step is to get the positions of the points that make up your edges. We need these to determine the vector of each edge. The image below shows where each point is, the angle we are trying to determine, and the vectors we will compute.



First, determine Vector_A by normalizing the result of Position_B – Position_A. Then determine Vector_B by normalizing the result of Position_C – Position_A.

Now that we have our two vectors, we take the dot product of Vector_A and Vector_B. Once we have that, we determine the angle by taking the arc cosine of the dot product. The result is the angle that your two vectors/edges create.

Now for the maxscript:

-- 'obj' is the Editable Poly object; 'central_vert' is the index of the vert being tested

-- Get the vert position
central_vert_pos = polyOp.getVert obj central_vert

-- Determine if this is a t-junction vert by looking at its edges
edge_list = polyOp.getEdgesUsingVert obj central_vert

if (edge_list.numberSet == 2) then (
    edge_verts = #{}

    -- Get all the non-central edge verts
    for current_edge in edge_list do (
        central_vert_list = polyOp.getVertsUsingEdge obj current_edge
        central_vert_list = central_vert_list - #{central_vert}
        edge_verts += central_vert_list
    )

    edge_verts_array = (edge_verts as array)

    -- Get the vert positions
    vert_b_pos = polyOp.getVert obj edge_verts_array[1]
    vert_c_pos = polyOp.getVert obj edge_verts_array[2]

    -- Get the edge vectors
    edge_a_vector = normalize (vert_b_pos - central_vert_pos)
    edge_b_vector = normalize (vert_c_pos - central_vert_pos)

    -- Get the dot product of the two edge vectors
    edge_dot = dot edge_a_vector edge_b_vector

    -- The arc cosine of the dot product is the angle between the edges
    edge_angle = acos edge_dot
)

MaxScript Set Face Smoothing Groups

One of the artists asked for a script to set all UV shells to different smoothing groups a while back. Today I finally got the time to take a look at it. Pulling some of the code from TexTools to get UV shell elements gave me what I needed to get the poly faces in each UV shell. When I started setting the smoothing groups for the UV shells, everything appeared to work, except that I ended up with only 6 smoothing groups on each object. I tried a few different things, thinking it was either modifier panel weirdness or that I was applying them wrong. After watching the MaxScript listener while manually setting smoothing groups, I realized what I was doing wrong: polyOp.setFaceSmoothGroup doesn't take the integer equivalent of the smoothing group you want to set, it takes a bit flag. Looking in the MaxScript documentation confirmed this, but it's easy to miss if you're just skimming through it.

So, for anyone who runs into this, the solution is simple:

-- Set the bit flag for the smoothing group (e.g. smoothing group 1 -> bit 1)
bit_flag = bit.set 0 smoothing_group true

-- Apply the flag ('obj' and 'face_list' are your poly object and face selection)
polyOp.setFaceSmoothGroup obj face_list bit_flag

Tuesday, February 23, 2010

Capping a face with a library cap piece

For the past 3 days I've been working on a solution to replace (cap) a face of varying orientations and size with a standardized library of cap pieces using MaxScript. I'm pretty sure I've got all the kinks worked out and it appears to be working as expected.

The steps to do this are as follows (don't worry, I'll break down each step further down in the post). Note: I leave out a few steps here that are common sense, for example deleting the bounding boxes, attaching the objects, and deleting the original faces.


  1. Determine the faces that need to be capped

  2. Determine a library cap piece

  3. Determine the face orientation

  4. Determine the center position of the face

  5. Build the random cap object

  6. Create a normal reference poly

  7. Create a bounding box cap piece

  8. Store the custom FFD data for the bounding box object and the cap object

  9. Determine the corner verts on the cap object and adjust the bounding box

  10. Apply the FFD deformation

  11. Flip inverted normals



I know it looks like a lot of steps, but they are all broken down into pretty simple functions. So lets get to the breakdown.


  1. Determine the faces that need to be capped

    This step is pretty simple. At Volition we use Material IDs to differentiate between material types. Knowing the Material ID of the faces I want to cap, I use an EditablePoly method called selectByMaterial. The command looks like this: <obj>.EditablePoly.selectByMaterial <material_id>

    Once that command is run, I can use a polyOp method to get the selected faces: polyOp.getFaceSelection <obj>

  2. Determine a library cap piece

    This step is too project-dependent to really be explained; it is all up to how the project wants to build and set up a cap library. I will explain the proposed idea for our library in another post.

    One important thing to note in this section though is the parameters my library pieces need to have to work correctly using my methods. For me there are two important things:

    1. Flagged corner verts using bitFlags

      This step requires the artist designing the cap pieces to select the 4 corner verts on a cap piece and "flag" them using vertex bit flags. The code to do that is below:



      fn set_vertex_bit_flags obj vertex_list bit_flag bit_value = (
          -- Unreserved bits
          if (bit_flag > 24 and bit_flag < 33) then (
              -- Build the bit flag
              bit_to_set = bit.set 0 bit_flag bit_value

              -- Set the vertex flag
              obj.setVertexFlags vertex_list bit_to_set
          ) else (
              messageBox "Invalid bit flag" title:"Invalid bit flag"
          )
      )

      fn get_vertex_bit_flags obj bit_flag bit_value = (
          -- Return value
          verts_with_flag = #{}

          -- Unreserved bits
          if (bit_flag > 24 and bit_flag < 33) then (
              -- Build the bit flag
              bit_to_get = bit.set 0 bit_flag bit_value

              -- Get the verts matching the flag
              verts_with_flag = polyOp.getVertsByFlag obj bit_to_get
          ) else (
              messageBox "Invalid bit flag" title:"Invalid bit flag"
          )

          -- Return the value
          verts_with_flag
      )




    2. Triangulated mesh

      Due to how Max handles the winding order of faces and poly creation, the mesh needs to be triangulated for the method I use to flip the normals to work. You'll see more on that later. This can be automated though using ConnectVertices. Code is as follows:



      polyOp.setVertSelection <selection> #{1..(polyOp.getNumVerts <selection>)}
      <selection>.ConnectVertices()


       



  3. Determine the face orientation

    This one took a little bit to get right, but here are the steps


    1. Get the face normal of the face we are aligning to

      Pretty simple step: polyOp.getFaceNormal <obj> <face_index>

    2. Get the edges from the face

      Again, pretty simple: polyOp.getEdgesUsingFace <obj> <face_index>. This bitArray will be used in a later step

    3. Get the verts from the face

      polyOp.getVertsUsingFace <obj> <face_index>. This bitArray will be used in a later step

    4. Look for the first vert that has 3 edges and store those edges

      This step is to make sure we get a corner vert, not a floating vert on an edge. This may be unnecessary on your geometry, but for ours it is a necessary step

    5. Determine the unused edges

      Subtracting the vert edges from the face edges will result in a bitArray of unused edges

    6. Determine the used edges


      Subtracting the unused edges from the face edges will result in a bitArray of used edges

    7. Loop through all the used edges and build a list of verts used on each edge

      This will result in an array of coinciding verts that we will use to determine the face's orientation. Basically we are looking for a vert list that looks like the following image:


    8. Build our vectors

      Using the positions of our 3 verts, build our two vectors and decide which will serve as the right vector based on length.

      The first vector is determined by subtracting the second vert from the first vert. The second is determined by subtracting the last vert from the first vert.

      Once that is done, get the absolute length of both vectors, and using those lengths determine which one will be used as the right vector.

      When that is determined, normalize both vectors and build your matrix using the two vectors and the face normal:



      matrix3 left_vector right_vector face_normal [0,0,0]


       



  4. Determine the center position of the face

    This is a pretty simple step as well: collect all of the vert positions using polyOp.getVert, add them together, and divide by the number of verts to get the average position (the center of the face)

  5. Build the random cap object

    Simply clone the chosen cap object; this could also be where you triangulate the cap.

  6. Create a normal reference poly

    To properly determine if the normals of the cap object were inverted in the transform, we need to create a reference poly on our cap object to get a baseline normal to compare against. By using the bounding box parameters of the cap object we can create a flat poly and flag the verts for later use. Code as follows:



    -- Get the vert positions
    vert_a_pos = [obj.min.x, obj.max.y, obj.min.z]
    vert_b_pos = [obj.max.x, obj.min.y, obj.min.z]
    vert_c_pos = [obj.max.x, obj.max.y, obj.min.z]

    -- Create the verts
    vert_a_ind = polyOp.createVert obj vert_a_pos
    vert_b_ind = polyOp.createVert obj vert_b_pos
    vert_c_ind = polyOp.createVert obj vert_c_pos

    -- Build the vert array
    vert_array = #(vert_a_ind, vert_b_ind, vert_c_ind)

    -- Make the polygon
    polyop.createPolygon obj vert_array


     

  7. Create a bounding box cap piece

    This is the box that will be used in the FFD (Free Form Deformation) calculation. It is created using the cap piece's bounding box positions. Be sure to reset the XForms and convert to a PolyObject

  8. Store the custom FFD data for the bounding box object and the cap object

    One of my colleagues here, Will Smith, created a custom FFD function to use for this. We tried exploring Max's FFD modifiers, but control was very limited and the coordinate system the FFDs use made this overly complex. We decided the best solution would be to write our own, which it turns out wasn't very difficult. Since this isn't my code, I won't post it here, but essentially we determine how much weight a vert on the FFD object (in this case our bounding box) has on the verts of the deforming object (in this case our cap object) based on the distance between the two positions. One thing to note is that we needed to slightly scale up the bounding box to avoid division by zero and infinite values. I also added some checks in the function to prevent any floaters as well.

    If anyone is wondering about the math, we used the math found in this discussion on cgsociety.org for our baseline.

  9. Determine the corner verts on the cap object and adjust the bounding box

    This is where we properly adjust the bounding box to the corner verts of the face.

    To do this I loop through the verts that make up the outer edges of the face (verts with at least 3 edges) and find the closest vert based on distance. Since I know that the box I create has 8 verts, I am able to move the first 4 verts of the bounding box to the 4 corner positions of the face. The second step is to determine the amount I am moving each vert and then apply that amount to the bounding box's corresponding vert; to get that index, simply add 4 to the index of the bounding box vert you are working with.


  10. Apply the FFD deformation

    Again, using Will Smith's Custom FFD script, we apply the FFD deformation to the cap object according to the adjusted bounding box


  11. Flip inverted normals

    I'm going to avoid going on a "Why I Hate Max" rant here, but I still want to explain something about this step. As anyone who has messed with MaxScript knows, having the modifier panel open slows down your script quite a bit; for performance reasons it is always best to have the create panel open unless you specifically need something in the modifier panel. That said, polyOp has a built-in method for flipping face normals, polyOp.flipNormal. Unfortunately it doesn't work unless the modifier panel is enabled, which slowed down the execution of my script by quite a large amount. After speaking to Jeff Hanna, he explained that the face normal is derived from the winding order of the vertices, which led me to an experiment: would it be faster to rebuild the geometry with the correct winding order and transfer the UVs instead of opening the modifier panel and calling polyOp.flipNormal? What blew me away was that the answer was a resounding yes.

    I won't go into the geometry creation or transferring of UVs, but I will explain how the reference poly comes into play here for determining if the normals are reversed.

    To determine if the cap object's normals were reversed in the transform, I get the normal of the face we are applying the cap object to and the normal of the reference face. I then take the dot product of the reference face's normal and the cap object's face normal.

    If that value is less than 0, then I know that the normal has been reversed and I go ahead and recreate the geometry.



So there you have it, the basics for aligning a cap object to an arbitrary face.

Tuesday, February 16, 2010

HTML in clipboard

It turns out copying/pasting hyperlinks to the Windows clipboard isn't as easy as just copying the unformatted HTML; you need a specific format inside the clipboard before an application that handles HTML copy/paste will actually recognize it. Originally I looked for a solution using the Microsoft Office clipboard, since that is where the HTML was going to be pasted anyway, but it turns out there is no longer an object model for the Office clipboard in versions past 2000. I ended up finding the solution: copy/paste the HTML in the Windows HTML Clipboard format.

This is what the clipboard actually looks like when you have HTML as your format in the clipboard:


Version:0.9
StartHTML:71
EndHTML:170
StartFragment:140
EndFragment:160
StartSelection:140
EndSelection:160
<!DOCTYPE>
<HTML>
<HEAD>
<TITLE> The HTML Clipboard</TITLE>
<BASE HREF="http://sample/specs">
</HEAD>
<BODY>
<UL>
<!--StartFragment -->
<LI> The Fragment </LI>
<!--EndFragment -->
</UL>
</BODY>
</HTML>



Some more information on the HTML clipboard format can be found Here
Standard clipboard formats can be found Here

To implement the HTML clipboard in Python, I found a class Phillip Piper had written that does exactly what I need; that code can be found Here.
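For the curious, here's a rough sketch of building that header by hand with pywin32 (Phillip Piper's class handles the offset bookkeeping and edge cases far more carefully; the fixed-width %09d offsets just keep the header length predictable):

import win32clipboard  # pywin32

def put_html_on_clipboard(fragment):
    # Wrap the fragment in the markers the HTML Clipboard format expects
    html = ('<HTML><BODY><!--StartFragment-->' + fragment +
            '<!--EndFragment--></BODY></HTML>')
    header = ('Version:0.9\r\n'
              'StartHTML:%09d\r\nEndHTML:%09d\r\n'
              'StartFragment:%09d\r\nEndFragment:%09d\r\n')
    header_len = len(header % (0, 0, 0, 0))
    frag_start = header_len + html.index(fragment)
    data = header % (header_len, header_len + len(html),
                     frag_start, frag_start + len(fragment)) + html

    # 'HTML Format' is the registered clipboard format name Windows expects
    fmt = win32clipboard.RegisterClipboardFormat('HTML Format')
    win32clipboard.OpenClipboard()
    win32clipboard.EmptyClipboard()
    win32clipboard.SetClipboardData(fmt, data)
    win32clipboard.CloseClipboard()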

Also in my search I ran across a Python script called PathCatcher. From the docstring: PathCatcher is a Windows utility that allows one to right-click on a folder or a file in Explorer and save its path to the clipboard. PathCatcher can be located Here