User Representation and Avatar Realism in VR Mobility Tasks

Authors:
(1) Rafael Kuffner dos Anjos;
(2) João Madeiras Pereira.
Links table
Abstract and 1 Introduction
2 Related Work and 2.1 Virtual Avatars
2.2 Point Clouds
3 Test Design and 3.1 Preparation
3.2 User Representations
3.3 Methodology
3.4 Virtual Environment and 3.5 Task Description
3.6 Questionnaires and 3.7 Participants
4 Results and Discussion, 4.1 User Preferences
4.2 Task Performance
4.3 Discussion
5 Conclusions and References
3 Test Design
We study the effects of realism and perspective on natural tasks. This section presents the main aspects of designing the test experience with respect to user representation and task design. The following subsections describe the concept of the task, the user representations employed, and the setup used in the test sessions.
3.1 Preparation
A large-scale multi-sensor setup was used for two main reasons. First, the Kinect sensor has an effective range from 0.4 to 4.5 meters, with reliable skeleton tracking starting at 2.5 meters, and a larger space was needed to properly evaluate the mobility task. When the user moves outside a sensor's operating range, the quality of the experience degrades, so a wide-baseline setup ensures that the user's full body is always visible to at least one camera. Second, since the third-person perspective is offered as one of the interaction modes, the participant's full body must be visible at all times to avoid holes in the representation. A narrow baseline or a single-sensor setup would capture only part of the participant's body, severely compromising the experience.
Five sensors were installed on the walls of the laboratory where the study was held, covering an area of about 4 x 4 meters. Since the proposed mobility tasks were mainly performed along a line between two targets, we arranged the environment so that, while performing the tasks, the participant was always facing one sensor, so that the hands were always visible for the first-person perspective, and had their back to another sensor for the third-person perspective. The physical setup chosen for our study can be seen in Figure 1.
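To make the coverage argument concrete, the following is a small geometric sketch: given wall-mounted sensor poses around a 4 x 4 m area, it checks that every sampled point on the task line lies within at least one sensor's effective range and field of view. All sensor poses, target positions, and the 70° field-of-view value are illustrative assumptions, not the study's measured layout.

```python
import math

# Hypothetical wall-mounted sensor poses around a 4 x 4 m capture area,
# each given as ((x, z) position in meters, facing angle in radians).
SENSORS = [
    ((2.0, -0.5), math.pi / 2),    # front wall, facing +z
    ((-0.5, 2.0), 0.0),            # left wall, facing +x
    ((4.5, 2.0), math.pi),         # right wall, facing -x
    ((2.0, 4.5), -math.pi / 2),    # back wall, facing -z
    ((-0.5, -0.5), math.pi / 4),   # corner sensor, facing the room diagonal
]

MIN_RANGE, MAX_RANGE = 0.4, 4.5    # effective Kinect depth range in meters
FOV = math.radians(70.0)           # horizontal field of view (assumed value)

def sees(sensor, point):
    """True if `point` lies within this sensor's range and field of view."""
    (sx, sz), heading = sensor
    dx, dz = point[0] - sx, point[1] - sz
    dist = math.hypot(dx, dz)
    if not MIN_RANGE <= dist <= MAX_RANGE:
        return False
    bearing = math.atan2(dz, dx)
    # Wrap the angular offset into (-pi, pi] before comparing with the FOV.
    offset = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
    return abs(offset) <= FOV / 2

def covered(point):
    """Number of sensors that can see `point`."""
    return sum(sees(s, point) for s in SENSORS)

# Sample the task line between two hypothetical targets and report coverage.
a, b = (0.5, 2.0), (3.5, 2.0)
for i in range(11):
    t = i / 10
    p = (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
    print(p, covered(p))
```

A check like this, run offline with the calibrated poses, would confirm that no point on the walking line falls into a coverage hole.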
3.2 User representations
Regarding user representation, we chose three different representations, taking the well-known uncanny valley effect into account. The camera placement in the third-person perspective follows previous work by Kosch et al. [17], where the camera is positioned above the user's head to improve spatial awareness.
In all representations used, the Kinect joint positions and rotations are applied directly to the avatar using direct kinematics.
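This one-to-one mapping can be sketched as follows; all bone and joint names here are hypothetical, and a real implementation would set the engine's transform components rather than dictionary entries.

```python
from dataclasses import dataclass

@dataclass
class Joint:
    position: tuple   # (x, y, z) in sensor space
    rotation: tuple   # quaternion (x, y, z, w) reported by the tracker

def apply_pose(avatar_bones, kinect_joints):
    """Copy each tracked joint's transform straight onto the matching bone,
    with no retargeting or inverse kinematics in between."""
    for name, joint in kinect_joints.items():
        bone = avatar_bones.get(name)
        if bone is None:          # the mesh may have bones Kinect does not track
            continue
        bone["position"] = joint.position
        bone["rotation"] = joint.rotation
    return avatar_bones

# Minimal usage example with made-up values.
bones = {"Head": {}, "HandLeft": {}, "Spine": {}}
joints = {"Head": Joint((0.0, 1.7, 2.0), (0, 0, 0, 1)),
          "HandLeft": Joint((-0.3, 1.1, 2.1), (0, 0, 0, 1))}
apply_pose(bones, joints)
print(bones["Head"]["position"])  # (0.0, 1.7, 2.0)
```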
3.2.1 Abstract
The first avatar is a simplified abstract representation built from basic primitives: spheres for the head and for each joint, and cylinders for each bone linking the joints. Only the joints tracked by Microsoft Kinect are represented. Figures 2a and 2b show how this representation looked in the first- and third-person views (1PP and 3PP), respectively.
3.2.2 Mesh
The second representation is a realistic, human-like mesh avatar. This representation did not include individual finger animation, since fingers are not tracked by the Kinect sensor. Figures 2c and 2d show this representation in the first- and third-person views (1PP and 3PP), respectively.
3.2.3 Point Cloud
This body representation is based on a combination of separate point-cloud streams from several Microsoft Kinect sensors. Each individual sensor first captures the skeletal information of every person in its field of view. Then, a point cloud is created from the depth and color data seen by the camera, and the points belonging to users are segmented from the background.
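The user/background segmentation step can be sketched using a Kinect-style body-index map, where the SDK labels each depth pixel with the index of the tracked body it belongs to (255 meaning background). The exact mechanism used in the study is not specified, so this is one plausible approach:

```python
BACKGROUND = 255  # Kinect body-index value for pixels not on any tracked body

def segment_user_points(depth_points, body_index):
    """Keep only the 3D points whose depth pixel belongs to a tracked body."""
    return [p for p, idx in zip(depth_points, body_index) if idx != BACKGROUND]

# Made-up example: two points on body 0, one background point.
points = [(0.1, 1.0, 2.0), (0.2, 1.1, 2.0), (3.0, 0.0, 4.0)]
indices = [0, 0, BACKGROUND]
print(segment_user_points(points, indices))  # first two points only
```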
At several interaction points, different cameras will send very similar information, but due to the time-constrained nature of our problem, no merging of the different streams or removal of redundant points is performed. Instead, we implemented a simple downsampling technique that takes into account which body parts are most relevant to the task at hand. Using the captured skeleton information, we assign different priorities to each joint according to user-specified parameters. For the virtual reality scenario, information about the hands was found to be the most valuable, while head information was ignored.
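A minimal version of such priority-driven downsampling might look like this; the priority table and joint positions are made-up values, whereas in the study the parameters were user-specified.

```python
import math
import random

# Hypothetical per-joint priorities: the fraction of points kept near each
# joint. Hands get full density, the head is dropped, everything else is sparse.
PRIORITY = {"HandLeft": 1.0, "HandRight": 1.0, "Head": 0.0, "default": 0.3}

def nearest_joint(point, joints):
    """Name of the skeleton joint closest to this point."""
    return min(joints, key=lambda name: math.dist(point, joints[name]))

def downsample(points, joints, rng=random.Random(0)):
    """Keep each point with the probability given by its nearest joint's priority."""
    kept = []
    for p in points:
        prio = PRIORITY.get(nearest_joint(p, joints), PRIORITY["default"])
        if rng.random() < prio:
            kept.append(p)
    return kept

# Made-up example: one point near the left hand, one near the head.
joints = {"HandLeft": (0.0, 1.0, 0.0), "Head": (0.0, 2.0, 0.0)}
cloud = [(0.0, 1.05, 0.0), (0.0, 1.95, 0.0)]
print(downsample(cloud, joints))  # the hand point survives, the head point is dropped
```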
Since each sensor can only be connected to a single computer, we transmit the skeleton data and point clouds over the network to the host computer where the application runs. For each point, we transmit (x, y, z, r, g, b, q): the 3D coordinates, the color, and one bit indicating high or low quality. This last component is needed to adjust the rendering parameters in the host application: while high-quality points are more densely sampled and therefore require smaller splat sizes, sparsely sampled areas should use larger splats to create closed surfaces.
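One plausible wire layout for the per-point tuple (an assumption; the byte sizes are not given in the text) packs the position as three 32-bit floats, the color as three bytes, and the quality flag as one byte:

```python
import struct

# Assumed layout: little-endian, 3 x float32 position, 3 x uint8 color,
# 1 x uint8 quality flag -- 16 bytes per point.
POINT = struct.Struct("<fffBBBB")

def pack_point(x, y, z, r, g, b, high_quality):
    """Serialize one point as (x, y, z, r, g, b, q) for network transfer."""
    return POINT.pack(x, y, z, r, g, b, 1 if high_quality else 0)

def unpack_point(buf):
    """Inverse of pack_point, run on the host computer."""
    x, y, z, r, g, b, q = POINT.unpack(buf)
    return (x, y, z), (r, g, b), bool(q)

blob = pack_point(0.5, 1.2, 2.0, 200, 180, 160, True)
pos, color, hq = unpack_point(blob)
print(len(blob), pos, color, hq)
```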
Each sensor's pose is computed in advance on the host computer through a calibration step. The data is then streamed over the network and rendered in the environment using surface-aligned splats; the transmitted skeleton information is used for interaction purposes. We are able to stream and render the transmitted avatar at 30 frames per second, allowing smooth interaction on the user side. Figures 2e and 2f show this representation in the first- and third-person views.
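As a back-of-the-envelope check of the 30 fps streaming target, the following estimates how many points per frame a network link can carry; the link speed and the 16-byte per-point size are assumptions, not figures reported in the text.

```python
# Assumed numbers: a gigabit link and 16 bytes sent per point.
LINK_BPS = 1_000_000_000      # 1 Gbit/s
BYTES_PER_POINT = 16          # assumed wire size per point
FPS = 30                      # the frame rate reported in the text

points_per_frame = LINK_BPS / 8 / BYTES_PER_POINT / FPS
print(int(points_per_frame))  # roughly 260,000 points per frame
```

Under these assumptions the link is comfortably wide enough for a downsampled single-user cloud at 30 fps, which is consistent with the priority-based downsampling described above.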