HapMotion: motion-to-tactile framework with wearable haptic devices for immersive VR performance experience
Previous researchers introduced a haptic effect using an RGB-image-based visual saliency map that works with audio data (Li et al. 2021). In our case, we add 3D motion data to translate every movement of the performer into a meaningful haptic effect. We propose a motion salient triangle (MST) that aims to effectively translate the characteristics of movements into vibrotactile haptic feedback. In this section, we describe our novel rendering design approach using the proposed MST. Our rendering approach processes spatiotemporal parameters extracted from three-dimensional (3D) joint coordinates. Furthermore, a one-dimensional (1D) haptic phantom sensation (Park and Choi 2018) is adopted to express the detailed flow of the performer's motions across consecutive frames. We support robust real-time data processing with minimal data loss. Our method therefore achieves a high correlation between the vibrotactile effects and the virtual performer's movements, improving the audience's experience of a virtual performance.
4.1 Computing motion salient triangle from key element vertices
The MST is a key motion event localization method for translating one's motion into vibrotactile feedback. As mentioned in Sect. 3.2, a large portion of choreographic and communicative motions involves upper-body movements. Moreover, our analysis shows that hand joint coordinates play a crucial role in upper-body movements such as handing the microphone to the audience and inducing the audience to do a Mexican surf. For this reason, we assign the hand joint coordinates as the active joint coordinates \(J_{A}\), which carry rich information about the motion. In this work, we formulate 3D joint coordinates as \(J = (x, y, z)\).
We further define the root joint coordinates \(J_{R}\) and the center of mass of the torso \(J_{T}\). As shown in Fig. 5, \(J_{R}\) represents a stable point on the shoulder opposite the \(J_{A}\) side, reflecting the balanced position held while carrying out diverse motions. Since the shoulder's translational displacement is low compared to other joints during the performer's motion, we pick the shoulders for \(J_{R}\) (Golomer et al. 2009). \(J_{T}\) provides a stable point inside the torso, which mostly stays at its initial position. Using these two stationary points, our proposed algorithm considers not only the micro-level motion flow but also the macro-level stream of movement over continuous frames. We name \(J_{A}\), \(J_{T}\), and \(J_{R}\) the key element vertices, which are required to form the MST.
By connecting these key element vertices, we generate a 3D polygon. Our MST-based algorithm employs real-time human body tracking consisting of 32 joints from the Azure Kinect DK (Microsoft 2020). We designate \(J_{A}\) as either the \(\text{Hand}_{Right}\) or \(\text{Hand}_{Left}\) joint given by the Azure Kinect. We place \(J_{R}\) at either \(\text{Shoulder}_{Left}\) or \(\text{Shoulder}_{Right}\), whichever is symmetrically opposite \(J_{A}\).
Referencing the computation of the center of mass of human body segments (Adolphe et al. 2017), we first consider the spine-navel point as the center of mass of the human body. We evaluate \(r = \frac{R \cdot l}{Q}\) using the Unity 3D engine, where \(R\) is the reactive force value, set to 1; \(l\) is the length of the lever, computed from the height of the virtual character; and \(Q\) is the mass of the human body, calculated automatically. We then calibrate the center of mass of the torso through this equation.
MST Dynamic Point After creating the 3D triangle, we compute the MST dynamic point \(\text{MST}_{DP}\) as shown in Eq. 1. Here, \(J_{C}\) refers to the centroid of the MST. \(\omega_{Torso}\), \(\omega_{Active}\), and \(\omega_{Root}\) indicate the weighting coefficients for each key element vertex. We set \(\text{MST}_{DP}\) at a weighted distance from the key element vertices, which translates the direct flow of the movement. For the initial frame, we set the \(\omega\) values to 1 and adjust them afterward according to the movement of the performer.
$$\begin{aligned} \text{MST}_{DP} = J_{C} + \frac{(J_{A}-J_{C}) \cdot \omega_{Active} + (J_{R}-J_{C}) \cdot \omega_{Root} + (J_{T}-J_{C}) \cdot \omega_{Torso}}{\omega_{Active} + \omega_{Root} + \omega_{Torso}}, \end{aligned}$$
(1)
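To make the weighted-centroid form of Eq. 1 concrete, here is a minimal Python sketch (the system itself runs in Unity; the function name and the sample coordinates are ours for illustration). With all weights equal to 1, as in the initial frame, \(\text{MST}_{DP}\) reduces to the centroid \(J_{C}\):

```python
import numpy as np

def mst_dynamic_point(j_active, j_root, j_torso,
                      w_active=1.0, w_root=1.0, w_torso=1.0):
    """Compute the MST dynamic point (Eq. 1) as a weighted offset
    from the centroid J_C of the three key element vertices."""
    j_c = (j_active + j_root + j_torso) / 3.0  # centroid of the MST
    offset = (w_active * (j_active - j_c)
              + w_root * (j_root - j_c)
              + w_torso * (j_torso - j_c))
    return j_c + offset / (w_active + w_root + w_torso)

# Example: with equal weights the offsets cancel and MST_DP = J_C.
j_a = np.array([0.4, 1.5, 0.2])   # active hand joint (m)
j_r = np.array([-0.2, 1.4, 0.0])  # opposite shoulder (root joint)
j_t = np.array([0.0, 1.1, 0.0])   # center of mass of the torso
print(mst_dynamic_point(j_a, j_r, j_t))
```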
Figure 6 illustrates the overall system flow of our proposed haptic translation method, described as follows (a sketch of the active-joint decision in steps 3 and 4 is given after the list):

1. Collect the performer's 3D joint data (Azure Kinect) in real time and transfer the joint data to a virtual avatar in the Unity plugin.

2. Compute the \(J_{T}\) point if the current frame is not the initial frame.

3. Set \(\text{Joint}_{Right}\) and \(\text{Joint}_{Left}\) as potential active joints and keep track of both of their distances to \(J_{T}\). By comparing the computed distances, we determine the number of active joints \(J_{A}\).

4. Compute the acceleration of the \(J_{A}\) candidate(s). If both \(J_{A}\)s are above the threshold, we use two \(J_{A}\)s for computing \(\text{MST}_{DP}\); if one \(J_{A}\) is above the threshold, we use that \(J_{A}\) together with \(J_{R}\).

5. Distribute localization weights to each key element vertex (see Sect. 4.2.2) and compute \(\text{MST}_{DP}\) using Eq. 1.

6. Process mapping and warping of \(\text{MST}_{DP}\) (Fig. 7). If \(\text{MST}_{DP}\) is inside the bounded area, the tactile location is assigned through the 3D warping method; if not, we treat it as direct surface mapping.

7. Set the intensity level based on the distance from \(\text{MST}_{DP}\) to \(J_{T}\).

8. Increase the haptic intensity level if the acceleration of \(\text{MST}_{DP}\) goes above the threshold, as described in Sect. 4.3. If 3D warping assigns vertices to the exception nodes illustrated in Fig. 11a, we employ the 1D phantom sensation when adjusting the intensity level (see Fig. 11b).
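The following Python snippet sketches the active-joint decision in steps 3 and 4, using a rolling-average acceleration threshold (the three-frame window is taken from Sect. 4.2.2); the joint labels and the empty result for a still performer are our assumptions:

```python
import numpy as np

def select_active_joints(accel_right, accel_left, history, n_frames=3):
    """Decide how many active joints J_A to use this frame. The threshold
    is the mean acceleration over the previous `n_frames` frames."""
    if len(history) < n_frames:
        return []                      # not enough history yet
    threshold = np.mean(history[-n_frames:])
    right_on = accel_right > threshold
    left_on = accel_left > threshold
    if right_on and left_on:
        return ["hand_right", "hand_left"]      # two active joints
    if right_on:
        return ["hand_right", "shoulder_left"]  # one J_A plus opposite J_R
    if left_on:
        return ["hand_left", "shoulder_right"]
    return []                                   # no salient motion

# Example: the right hand accelerates well above the recent average.
print(select_active_joints(2.4, 0.3, history=[0.5, 0.6, 0.4]))
```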
4.2 Translating tactile location
4.2.1 Rendering MST dynamic point
The proposed algorithm maintains a controlled proportion of the distance between \(\text{MST}_{DP}\) and the surface of the torso. In our system, we stream direct Vector3-type raycasting to the target point (TP). The TP is the centroid of four representative joint coordinates: the front and back of the torso and the left/right shoulders (Fig. 7a).
Figure 8a shows the top and side views of the warping range and how raycasting stimulates each haptic node. The warping boundary is set based on the range of motion (ROM) data from our previous surveys (see Sect. 3.2): the performer's maximum and minimum X and Z ROM become the extents of the X-axis and Z-axis of the warping boundary. We measure the maximum and minimum lengths between \(J_{A}\) and the local coordinate of the performer's \(J_{T}\) to define the range of warping availability. In general, the maximum and minimum ranges come out to 185 cm and 13 cm. If \(\text{MST}_{DP}\) is out of these ranges, we adjust it to the closest boundary.
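A minimal sketch of this boundary adjustment, under the assumption that the clamping is radial with respect to \(J_{T}\) (one plausible reading of the text; the 13 cm and 185 cm limits are the reported values):

```python
import numpy as np

R_MIN, R_MAX = 0.13, 1.85  # reported warping range, in meters

def clamp_to_warping_range(mst_dp, j_t, r_min=R_MIN, r_max=R_MAX):
    """Radially clamp MST_DP so its distance to J_T stays inside the
    measured warping range; out-of-range points snap to the boundary."""
    v = mst_dp - j_t
    d = np.linalg.norm(v)
    if d == 0.0:
        return mst_dp.copy()           # degenerate: at the torso center
    return j_t + v * (np.clip(d, r_min, r_max) / d)

j_t = np.zeros(3)
print(clamp_to_warping_range(np.array([2.5, 0.0, 0.0]), j_t))  # [1.85, 0, 0]
```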
As shown in Fig. 8a, there are two exemplary cases for assigning haptic feedback by 3D warping. By applying a homogeneous matrix, we convert \(\text{MST}_{DP}\) from 3D coordinates to the 2D haptic display nodes. Figure 8b indicates surface mapping, which occurs when \(\text{MST}_{DP}\) hits a haptic display node directly. We explain 3D warping and direct surface mapping in more detail below.
When the raycasting lands between consecutive actuator nodes (\(\text{Node}_{A}\), \(\text{Node}_{B}\), \(\text{Node}_{C}\), \(\text{Node}_{D}\)), we deploy a modified 1D phantom sensation (Park and Choi 2018). Figure 8c shows the haptic output after raycasting. As the ray hits between actuator nodes, those nodes are actuated at the same time with different intensity levels owing to the 1D phantom sensation and the 2D grid-based sensation. We describe the two main cases of this additive sensation in Sect. 4.3.4.
3D Warping Figure 8a illustrates how we process raycasting to warp \(\text{MST}_{DP}\) to a haptic display node. The ray is cast from \(\text{MST}_{DP}\) to the target point. If the ray hits a node located within the boundary, we set that node as the haptic proxy.
To convey a natural and embodied user experience, our system aims for haptic feedback mirrored from the virtual performer. This means that audiences feel the flipped feedback of the performer's motions, as if they were watching the performer in a mirror. We consider that this mirrored rendering design enhances the level of embodiment while experiencing the vibrotactile feedback, as mentioned in Sect. 3.2.
Direct Surface Mapping When \(\text{MST}_{DP}\) falls below the minimum warping range shown in Fig. 7b, it generally lies directly on the surface nodes of the performer's torso. In this case, these surface nodes become the haptic proxies that transfer the tactile feedback, as shown in Fig. 8b.
Out-of-Range Projection Since our system supports real-time rendering, robust treatment of unexpected cases is essential. If \(\text{MST}_{DP}\) is positioned beyond the pre-calculated maximum range, we project it onto the closest coordinate within the maximum range (Fig. 8c).
4.2.2 Integrating motion context into the MST dynamic point
To cover the various motion types of a virtual performance, we adjust the weight distribution when computing \(\text{MST}_{DP}\). We consider single-active-joint and dual-active-joint conditions.
For the weight distribution, we compare each appointed \(J_{A}\)'s acceleration data in every frame. As the acceleration threshold, we use the average acceleration over the previous three frames. If the acceleration value of the current frame is higher than this real-time threshold, we append the weight value \(A_{t-2} - A_{t-1}\) to that \(J_{A}\). The calculated weight value is then applied in real time through Eq. 1.
For dual active joints, the key element vertices consist of two \(J_{A}\) and the sole \(J_{T}\). In this case, we compute both the acceleration and the distance values. If the acceleration values in the current frame \(t\) for both active joints are higher than the real-time rolling-average threshold, we confirm that there are two active joints to render. We then compute the distances from the left \(J_{A}\) to \(\text{MST}_{DP}\) and from the right \(J_{A}\) to \(\text{MST}_{DP}\) at the same time. By comparing the distance values of the two active joints, we assign the weight value \(A_{t-2} - A_{t-1}\) to the joint farther from \(\text{MST}_{DP}\). Figure 9b depicts the condition for two active joints.
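The Python sketch below shows one reading of this weight update: the rolling three-frame threshold and the \(A_{t-2} - A_{t-1}\) increment come from the text, while the function structure and the tie-breaking by distance in the dual-joint case are our interpretation:

```python
import numpy as np

def weight_update(accel_now, accel_hist, n_frames=3):
    """Per-joint weight increment: if the current acceleration exceeds the
    mean of the previous `n_frames` frames, return (A_{t-2} - A_{t-1})."""
    if len(accel_hist) < n_frames:
        return 0.0
    threshold = np.mean(accel_hist[-n_frames:])
    return accel_hist[-2] - accel_hist[-1] if accel_now > threshold else 0.0

def dual_joint_weights(accels, dists, hists, base=1.0):
    """Dual-active-joint case: when both joints qualify, the joint farther
    from MST_DP receives the extra weight."""
    weights = [base, base]
    extras = [weight_update(a, h) for a, h in zip(accels, hists)]
    if all(e != 0.0 for e in extras):
        far = int(dists[1] > dists[0])   # index of the farther joint
        weights[far] += extras[far]
    return weights

hist = [1.0, 1.4, 1.1]                     # accelerations at t-3, t-2, t-1
print(round(weight_update(2.0, hist), 2))  # 1.4 - 1.1 = 0.3 extra weight
```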
4.3 Translating tactile intensity
4.3.1 Hardware intensity calibration
To translate tactile intensity with a given set of hardware, we first define the hardware calibration coefficient \(C\) to provide precise tactile stimuli. This calibration identifies the input-output relationship of our actuators, which ensures reliable subsequent results. We measured the output acceleration of each eccentric rotating mass (ERM) actuator using a high-precision 9-DoF IMU (SparkFun ICM-20948) while varying the input amplitude, and the measured accelerations under each condition were fit with linear interpolation. For each output amplitude, we recorded the corresponding vibration frequency of the vibrotactile actuators in the bHaptics vest and sleeve (range 1.00 to 4.37 G, where G refers to gravitational acceleration). The most effective vibrotactile frequency band for human perception lies between 130 and 230 Hz (Sun et al. 2022). To satisfy both vibrotactile intensity and frequency, we set \(C\) to 6, which corresponds to level 6 of the bHaptics intensity parameter (3.16 G at 142 Hz).
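As a small sketch of how such a calibration curve can be queried, assuming linear interpolation between measured points; only the 1.00 G, 3.16 G (level 6), and 4.37 G values come from the text, and the placement of levels on the input axis is hypothetical:

```python
import numpy as np

# Hypothetical calibration table: bHaptics intensity level vs. measured
# peak acceleration (G). Only the endpoints and the level-6 operating
# point are reported values; the level axis itself is our assumption.
levels = np.array([1.0, 6.0, 10.0])
accel_g = np.array([1.00, 3.16, 4.37])

def predicted_accel(level):
    """Linearly interpolate ERM output acceleration (G) at a given level."""
    return float(np.interp(level, levels, accel_g))

print(predicted_accel(6))   # 3.16 G, the chosen calibration point C
```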
4.3.2 Intensity control strategy
To accurately simulate the sensation of upper-body movement, we adjust the intensity level according to the distance from \(\text{MST}_{DP}\) to \(J_{T}\); controlling the intensity at a fine level is therefore necessary (Li et al. 2021). We control the ERM intensity parameter to effectively convey the performer's motions. Depending on the distance from \(\text{MST}_{DP}\) to \(J_{T}\), the tactile intensity levels are linearly combined: the larger the ROM, the higher the tactile amplitude. By adjusting the tactile intensity based on this distance, which represents the quantity of the performer's motion, users can easily follow the flow of the performer's movements. The proposed intensity control strategy benefits motions with precise and dynamic contexts, such as choreographic and communicative motions.
$$\begin{aligned} I_{t} = \alpha \cdot D_{t} \cdot C + (1-\alpha) \cdot I_{t-1} \end{aligned}$$
(2)
Equation 2 is based on an exponential filter that uses exponentially weighted averaging to produce the output value. Here, \(I_{t}\), \(\alpha\), and \(D_{t}\) refer to the total intensity value, the smoothing factor, and the distance between the two vertices, respectively. In our work, we set \(\alpha\) to 0.5, distributing equal importance to the current frame \(t\) and the previous frame \(t-1\).
As mentioned previously, we adjust \(C\) to transfer the intended tactile intensity to the bHaptics vest and sleeve. As stated in Sect. 4.3.1, we confirmed that bHaptics's level-6 intensity parameter is the most comfortable value (Maereg et al. 2017). Thus, we set \(C\) to level 6.
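Equation 2 with \(\alpha = 0.5\) and \(C = 6\) can be sketched directly; the short loop below shows how a sudden jump in distance is smoothed across frames (the sample distances are ours):

```python
def smoothed_intensity(dist_t, prev_intensity, alpha=0.5, c=6.0):
    """Eq. 2: exponentially weighted intensity. `dist_t` is the current
    MST_DP-to-J_T distance; `c` is the hardware calibration coefficient."""
    return alpha * dist_t * c + (1.0 - alpha) * prev_intensity

# A sudden reach (distance jump from 0.2 m to 0.9 m) ramps up smoothly.
intensity = 0.0
for dist in [0.2, 0.2, 0.9, 0.9]:
    intensity = smoothed_intensity(dist, intensity)
    print(round(intensity, 3))   # 0.6, 0.9, 3.15, 4.275
```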
4.3.3 Intensity distribution based on motion dynamics
As previously mentioned, Distance(\(J_{T}\), \(\text{MST}_{DP}\)) indicates the distance between \(\text{MST}_{DP}\) and \(J_{T}\), as shown in Fig. 10. We increase the intensity as this distance grows, which intuitively conveys the performer's movement as a tactile experience. We also accommodate dynamic motions by controlling the intensity level based on the active joint's (\(J_{A}\)) acceleration. The overall intensity is modified if the acceleration exceeds the threshold, namely the mean acceleration over the most recent three frames. If the acceleration of the current frame exceeds it, at most a level-2 intensity (1.23 G at 103 Hz) is added, in line with the minimum humanly noticeable intensity (Verrillo 1966); the maximum intensity therefore corresponds to intensity level 8 (see the sketch below).
As a particular example, for some communicative and choreographic motions whose active joint(s) move at the same speed but translate forward or backward over continuous frames, rendering different levels of intensity on the haptic display is inevitable. Because our vibrotactile intensity translation reflects the amount of displacement between the two main element vertices, it handles motion types with both large and small translations.
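A compact sketch of the dynamic boost rule, under our reading that the boost is all-or-nothing (a full level 2) and the total is capped at level 8:

```python
import numpy as np

MAX_LEVEL = 8  # level 6 (calibration C) plus up to a level-2 dynamic boost

def boosted_level(base_level, accel_now, accel_hist, boost=2, n_frames=3):
    """Add the dynamic boost when the current acceleration exceeds the
    mean of the most recent `n_frames` accelerations; cap at level 8."""
    threshold = np.mean(accel_hist[-n_frames:])
    level = base_level + (boost if accel_now > threshold else 0)
    return min(level, MAX_LEVEL)

print(boosted_level(6, accel_now=3.1, accel_hist=[1.0, 1.2, 0.9]))  # -> 8
```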
4.3.4 1D phantom and 2D grid-based tactile sensation
To convey subtly graded intensity, we provide a phantom sensation inspired by Park and Choi (2018). When the ray cast hits the computed bounded area, which is set to the width of the length between adjacent nodes divided into ten units (Unity 3D), we add a supplementary vibration intensity to the primarily computed intensity. The intensity of each node is adjusted along these divided units, obtained by multiplying the value \(K\), the distance between nodes, and the normalized distance portion \(\alpha_{n}\).
Figure 11a illustrates the 1D phantom sensation between two consecutive nodes. Tracking the destination of the ray through the normalized distance portion, the node closest to the ray is designated the main node and regarded as the starting point with coordinate (0, 0). If the ray hits near \(\text{Node}_{A}\), \(\text{Node}_{A}\) gains the computed intensity \(K \cdot (1-\alpha_{n})\), while \(\text{Node}_{B}\) gains \(K \cdot \alpha_{n}\).
In the case of the 2D grid-based tactile sensation, Fig. 11b shows a ray hitting among four adjoining nodes. This rendering rule extends the 1D phantom sensation described above. We distribute the computed intensity separately to the four nodes near the perceived point by the following rules. In the 2D grid-based sensation, the node closest to the ray's destination is regarded as the main node (\(\text{Node}_{A}\)). We examine three correlation sets between the main node and the supplementary nodes (\(\text{Node}_{B}\), \(\text{Node}_{C}\), \(\text{Node}_{D}\)). The two nodes \(\text{Node}_{B}\) and \(\text{Node}_{C}\) comply with the 1D phantom sensation rule. For the intensity of \(\text{Node}_{D}\), we average the values distributed to \(\text{Node}_{B}\) and \(\text{Node}_{C}\), which preserves the continuity of the transition across the connected nodes. For the intensity of \(\text{Node}_{A}\), we set the multiplying coefficient to 0.5 to prevent the node from saturating at high intensity.
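To summarize these two distribution rules, here is a Python sketch; the 1D rule follows the text directly, while the per-axis split for \(\text{Node}_{B}\) and \(\text{Node}_{C}\) in the 2D case is our interpretation of how they comply with the 1D phantom sensation rule:

```python
def phantom_1d(k, alpha_n):
    """1D phantom sensation between two consecutive nodes (Fig. 11a):
    main node A gets K*(1 - alpha_n), neighbor B gets K*alpha_n, where
    alpha_n is the normalized ray-hit position between the nodes."""
    return {"A": k * (1.0 - alpha_n), "B": k * alpha_n}

def phantom_2d(k, alpha_x, alpha_y, main_coeff=0.5):
    """2D grid-based sensation among four adjoining nodes (Fig. 11b):
    B and C apply the 1D rule along the two grid axes, D averages B and
    C, and main node A is scaled by 0.5 to avoid intensity saturation."""
    i_b = k * alpha_x            # 1D rule along one axis
    i_c = k * alpha_y            # 1D rule along the other axis
    i_d = (i_b + i_c) / 2.0      # diagonal node: average of B and C
    i_a = main_coeff * k         # damped main node
    return {"A": i_a, "B": i_b, "C": i_c, "D": i_d}

print(phantom_1d(1.0, 0.3))
print(phantom_2d(1.0, 0.3, 0.2))
```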