
Design Considerations of Voice Articulated Generative AI Virtual Reality Dance Environments

Conference Proceeding
Casas, L., Mitchell, K., Tamariz, M., Hannah, S., Sinclair, D., Koniaris, B., & Kennedy, J. (in press)
Design Considerations of Voice Articulated Generative AI Virtual Reality Dance Environments.
We consider practical and social considerations of collaborating verbally with colleagues and friends, not confined by physical distance, but through seamless networked telepr...

DanceMark: An open telemetry framework for latency sensitive real-time networked immersive experiences

Conference Proceeding
Koniaris, B., Sinclair, D., & Mitchell, K. (in press)
DanceMark: An open telemetry framework for latency sensitive real-time networked immersive experiences.
DanceMark is an open telemetry framework designed for latency-sensitive real-time networked immersive experiences, focusing on online dancing in virtual reality within the Dan...

Embodied online dance learning objectives of CAROUSEL+

Conference Proceeding
Mitchell, K., Koniaris, B., Tamariz, M., Kennedy, J., Cheema, N., Mekler, E., …Mac Williams, C. (2021)
Embodied online dance learning objectives of CAROUSEL+. In 2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW) (pp. 309-313). https://doi.org/10.1109/VRW52623.2021.00062
This is a position paper concerning the embodied dance learning objectives of the CAROUSEL+ project, which aims to impact how online immersive technologies influence multiu...

Method for Efficient CPU-GPU Streaming for Walkthrough of Full Motion Lightfield Video

Conference Proceeding
Chitalu, F. M., Koniaris, B., & Mitchell, K. (2017)
Method for Efficient CPU-GPU Streaming for Walkthrough of Full Motion Lightfield Video. In CVMP 2017: Proceedings of the 14th European Conference on Visual Media Production (CVMP 2017). https://doi.org/10.1145/3150165.3150173
Lightfield video, as a high-dimensional function, is very demanding in terms of storage. As such, lightfield video data, even in a compressed form, do not typically fit in GPU...

IRIDiuM+: deep media storytelling with non-linear light field video

Conference Proceeding
Kosek, M., Koniaris, B., Sinclair, D., Markova, D., Rothnie, F., Smoot, L., & Mitchell, K. (2017)
IRIDiuM+: deep media storytelling with non-linear light field video. In SIGGRAPH '17 ACM SIGGRAPH 2017 VR Village. https://doi.org/10.1145/3089269.3089277
We present immersive storytelling in VR enhanced with non-linear sequenced sound, touch and light. Our Deep Media (Rose 2012) aim is to allow for guests to physically enter re...

Real-time rendering with compressed animated light fields

Conference Proceeding
Koniaris, B., Kosek, M., Sinclair, D., & Mitchell, K. (2017)
Real-time rendering with compressed animated light fields. In GI '17 Proceedings of the 43rd Graphics Interface Conference (pp. 33-40). https://doi.org/10.20380/GI2017.05
We propose an end-to-end solution for presenting movie quality animated graphics to the user while still allowing the sense of presence afforded by free viewpoint head motion....

IRIDiuM: immersive rendered interactive deep media

Conference Proceeding
Koniaris, B., Israr, A., Mitchell, K., Huerta, I., Kosek, M., Darragh, K., …Moon, B. (2016)
IRIDiuM: immersive rendered interactive deep media. https://doi.org/10.1145/2929490.2929496
Compelling virtual reality experiences require high quality imagery as well as head motion with six degrees of freedom. Most existing systems limit the motion of the viewer (p...

Stereohaptics: a haptic interaction toolkit for tangible virtual experiences

Conference Proceeding
Israr, A., Zhao, S., McIntosh, K., Schwemler, Z., Fritz, A., Mars, J., …Mitchell, K. (2016)
Stereohaptics: a haptic interaction toolkit for tangible virtual experiences. In SIGGRAPH '16: ACM SIGGRAPH 2016 Studio. https://doi.org/10.1145/2929484.2970273
With a recent rise in the availability of affordable head mounted gear sets, various sensory stimulations (e.g., visual, auditory and haptics) are integrated to provide seamle...

User, metric, and computational evaluation of foveated rendering methods

Conference Proceeding
Swafford, N. T., Iglesias-Guitian, J. A., Koniaris, C., Moon, B., Cosker, D., & Mitchell, K. (2016)
User, metric, and computational evaluation of foveated rendering methods. In SAP '16 Proceedings of the ACM Symposium on Applied Perception. https://doi.org/10.1145/2931002.2931011
Perceptually lossless foveated rendering methods exploit human perception by selectively rendering at different quality levels based on eye gaze (at a lower computational cost...

Real-time variable rigidity texture mapping

Conference Proceeding
Koniaris, C., Mitchell, K., & Cosker, D. (2015)
Real-time variable rigidity texture mapping. In CVMP '15 Proceedings of the 12th European Conference on Visual Media Production. https://doi.org/10.1145/2824840.2824850
Parameterisation of models is typically generated for a single pose, the rest pose. When a model deforms, its parameterisation characteristics change, leading to distortions i...