Introduction
Getting any HMD synced up with the Perception Neuron integration can be a bit tricky. Here is how
we did it and what we found works best. There are three core rules worth mentioning:
- The rotation tracker of the HMD should always have priority. Don’t overwrite its rotation values and don’t use the head rotation values from the Perception Neuron system.
- Don’t use the positional tracking of the HMD (a minimal way to do this in Unity is sketched right after this list).
- Never parent the HMD cameras or GameObjects to the skeleton setup.
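The following sketch shows one way to follow the second rule. It is only an illustration, not part of our integration: it assumes Unity's built-in UnityEngine.XR.InputTracking API, and the class name DisableHmdPosition is made up for this example.

using UnityEngine;
using UnityEngine.XR;

// Keeps the HMD's rotation tracking but ignores its positional tracking,
// so the Perception Neuron skeleton alone drives the head position.
public class DisableHmdPosition : MonoBehaviour
{
    void Start()
    {
        // Assumption: the built-in XR tracking is used; adjust for your specific VR SDK.
        InputTracking.disablePositionalTracking = true;
    }
}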
Inside the examples folder there is an example called VRExample. It contains a simple script and a
scene showing you how to combine our system with any virtual reality HMD. The example script works
as follows:
- On every new frame we position the VR Camera Rig at the same position as the head target object (a minimal sketch of this follows after the list).
- The target object is an empty GameObject inside the Head skeleton hierarchy. We use it to provide an easy way to define an offset and to set the correct position on the head.
- Since we never change the rotation of the VR Rig, you need to reset its tracker once you have the HMD on your head. This way the rig will be aligned correctly with your virtual body. Make sure you’re facing forward in a T-Pose manner when resetting the VR tracking. Press the R key on your keyboard to reset VR tracking.
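Below is a minimal sketch of that behaviour. It is not the actual VRExample script: the field names vrRig and headTarget are placeholders you would assign yourself, and it assumes Unity's built-in UnityEngine.XR.InputTracking API for recentering.

using UnityEngine;
using UnityEngine.XR;

public class FollowHeadTarget : MonoBehaviour
{
    public Transform vrRig;       // VR camera rig; its rotation is never touched here (rule 1)
    public Transform headTarget;  // empty GameObject inside the Head skeleton hierarchy

    void LateUpdate()
    {
        // Copy only the position of the head target; the HMD keeps full control of the rotation.
        vrRig.position = headTarget.position;

        // Reset VR tracking while facing forward in a T-Pose.
        if (Input.GetKeyDown(KeyCode.R))
            InputTracking.Recenter();
    }
}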
TIPS
For users attempting to map an HMD onto our mocap data, the following steps can help.
Once VR is connected, the camera's view is driven by the HMD tracking again and no longer follows the robot's head. To compensate, you can add two parent nodes above the camera, like this:
parent1
--parent2
----camera
parent2 cancels out the camera's local (HMD-driven) transform:
parent2.localRotation = Quaternion.Inverse(camera.localRotation); parent2.localPosition = -(parent2.localRotation * camera.localPosition);
parent1 follows robot.head:
parent1.position = robot.head.position; parent1.rotation = robot.head.rotation;
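Here is a minimal C# sketch of the same idea. The field names (parent1, parent2, vrCamera, robotHead) are placeholders you would assign yourself in the Inspector.

using UnityEngine;

public class HeadCameraCompensation : MonoBehaviour
{
    public Transform parent1;   // outer parent, follows the robot's head
    public Transform parent2;   // inner parent, cancels the camera's HMD-driven local pose
    public Transform vrCamera;  // the VR-driven camera, child of parent2
    public Transform robotHead; // head bone of the mocap skeleton ("robot.head" above)

    void LateUpdate()
    {
        // parent2 applies the inverse of the camera's local pose, so the camera
        // ends up exactly at parent1 (Quaternion.Inverse, not negation, inverts a rotation).
        parent2.localRotation = Quaternion.Inverse(vrCamera.localRotation);
        parent2.localPosition = -(parent2.localRotation * vrCamera.localPosition);

        // parent1 follows the robot's head, so the whole camera stack moves with the mocap data.
        parent1.position = robotHead.position;
        parent1.rotation = robotHead.rotation;
    }
}

Note that running the compensation every frame discards the HMD's own head tracking entirely; if you want to keep the HMD's look-around on top of the mocap head, one option is to compute the parent2 values only once, at the moment you recenter the tracking.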