diff --git a/_posts/2024-03-20-vid2real.md b/_posts/2024-03-20-vid2real.md
index 38911c2..200bf46 100644
--- a/_posts/2024-03-20-vid2real.md
+++ b/_posts/2024-03-20-vid2real.md
@@ -9,6 +9,6 @@ paper: true
 
 *The Vid2RealHRI framework was used to design an online study using first-person videos of robots as real-world encounter surrogates. The online study (n = 385) distinguished the within-subjects effects of four robot behavioral conditions on perceived social intelligence and human willingness to help the robot enter an exterior door. A real-world, between-subjects replication (n = 26) using two conditions confirmed the validity of the online study's findings and the sufficiency of the participant recruitment target (22) based on a power analysis of online study results. The Vid2RealHRI framework offers HRI researchers a principled way to take advantage of the efficiency of video-based study modalities while generating directly transferable knowledge of real-world HRI.*
 
-*Collaborative work with Yao-Cheng Chan, Sadanand Modak, Joydeep Biswas, and Justin Hart.*
+*Collaborative work with my doctoral student [Yao-Cheng Chan](https://yaochengchan.com/), robotics doctoral student [Sadanand Modak](https://scholar.google.com/citations?user=yEPOWSYAAAAJ&hl=en), and Texas Robotics colleagues [Joydeep Biswas](https://www.joydeepb.com/) and [Justin Hart](http://justinhart.net/).*
 
 *This work was supported by NSF grant #2219236 and UT Austin's Good Systems initiative.*