I'm working on a small WPF desktop app to track a robot. I have a Kinect for Windows on my desk and I was able to do the basic features: running the depth camera stream and the RGB camera stream.
What I need now is to track the robot on the floor, but I have no idea where to start. I found out that I should use Emgu (an OpenCV wrapper).
What I want is to track the robot and find its location using the depth camera. Basically, it's localization of the robot using stereo triangulation. I'm using TCP over Wi-Fi to send the robot commands to move from one place to another, using both the RGB and depth cameras. The RGB camera is used to map the objects in the area so the robot can take the best path and avoid obstacles.
The problem is that I have never worked with computer vision before; it's my first time. I'm not stuck on a deadline, and I'm more than willing to learn everything related in order to finish this project.
I'm looking for details, explanations, hints, links, or tutorials to achieve what I need.
Thanks.
Robot localization is a tricky problem, and I myself have been struggling with it for months now. From what I have achieved, I can tell you that you have a number of options:
- Optical-flow-based odometry (also known as visual odometry):
  - Extract keypoints (features) from one image (I used Shi-Tomasi, i.e. cvGoodFeaturesToTrack)
  - Do the same for the consecutive image
  - Match these features (I used Lucas-Kanade)
  - Extract depth information from the Kinect
  - Calculate the transformation between the two 3D point clouds.
What the above algorithm does is try to estimate the camera motion between two frames, which tells you the position of the robot.
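To make the last step concrete, here is a sketch in plain Python/NumPy rather than C#/Emgu: the SVD-based Kabsch algorithm for estimating the rigid transformation between two sets of matched 3D points. The function name and toy data are my own illustration, not from any library mentioned above.

```python
import numpy as np

def rigid_transform_3d(src, dst):
    """Estimate rotation R and translation t so that R @ src_i + t ~= dst_i.

    src, dst: (N, 3) arrays of matched 3D points, e.g. tracked Kinect points
    from two consecutive frames. Uses the SVD-based Kabsch algorithm.
    """
    src_c = src - src.mean(axis=0)            # center both clouds
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                       # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Toy check: rotate a cloud 90 degrees about z, shift it, recover the motion.
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])
src = np.random.default_rng(0).random((20, 3))
dst = src @ R_true.T + t_true
R, t = rigid_transform_3d(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

With noisy real matches you would feed this the 3D points of the Lucas-Kanade correspondences (ideally inside a RANSAC loop to reject bad matches).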
- Monte Carlo localization: this one is rather simpler, but you should use wheel odometry with it. Check this paper out for a C#-based approach.
The method above uses probabilistic models to determine the robot's location.
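To show the idea behind Monte Carlo localization, here is a toy particle filter in plain Python (not the C# approach from the paper): a robot drives along a 1-D circular corridor and senses whether it is in front of a door. The map, sensor model, and weight values are invented for illustration.

```python
import random
from collections import Counter

# Toy Monte Carlo localization on a 1-D circular corridor.
# 'D' cells are doors the robot's binary sensor can detect.
WORLD = "D..D....D."   # invented 10-cell map
N = 500                # number of particles

def sense(pos):
    """Noise-free sensor model: is there a door at this cell?"""
    return WORLD[pos % len(WORLD)] == "D"

random.seed(1)
particles = [random.randrange(len(WORLD)) for _ in range(N)]  # uniform prior
true_pos = 0

for _ in range(6):                        # robot drives one cell per step
    true_pos = (true_pos + 1) % len(WORLD)
    measurement = sense(true_pos)
    # 1. Motion update: shift every particle by the wheel-odometry estimate.
    particles = [(p + 1) % len(WORLD) for p in particles]
    # 2. Measurement update: particles agreeing with the door/no-door
    #    reading get a high weight (toy sensor model).
    weights = [1.0 if sense(p) == measurement else 0.01 for p in particles]
    # 3. Resample particles in proportion to their weights.
    particles = random.choices(particles, weights=weights, k=N)

# The particle cloud should have collapsed onto the true cell.
best, count = Counter(particles).most_common(1)[0]
print(best, true_pos)
```

A real implementation would use continuous poses (x, y, heading), Gaussian motion noise on the wheel odometry, and a likelihood model for the range sensor, but the move-weight-resample loop is exactly the same.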
The sad part, though, is that while libraries exist in C++ that do what you need easily, wrapping them for C# is a Herculean task. If you can code a wrapper, 90% of your work is done; the key libraries to use are PCL and MRPT.
The last option (which is by far the easiest, but also inaccurate) is to use KinectFusion, built into Kinect SDK 1.7. My experiences with it for robot localization have been bad.
You must read SLAM for Dummies; it will make things like Monte Carlo localization very clear.
The hard reality is that this is tricky, and you may end up doing it all yourself. I hope you dive into this vast topic and learn some awesome stuff.
For further information, or for the wrappers I have written, comment below... :-)
Best