Tao Lin, Qi Wu, Jun Liu, Ziliang Shi, Pei Nian Liu, and Nian Lin
Published in The Journal of Chemical Physics, 2015
Qi Wu, Will Usher, Steve Petruzza, Sidharth Kumar, Feng Wang, Ingo Wald, Valerio Pascucci, and Charles D. Hansen
Published in The Eurographics Symposium on Parallel Graphics and Visualization (EGPGV), 2018
Feng Wang, Ingo Wald, Qi Wu, Will Usher, and Chris R. Johnson
Published in IEEE Visualization Conference (VIS), 2018
Mengjiao Han, Ingo Wald, Will Usher, Qi Wu, Feng Wang, Valerio Pascucci, Charles D. Hansen, and Chris R. Johnson
Published in The Eurographics Conference on Visualization (EuroVis), 2019
Qi Wu, Tyson Neuroth, Oleg Igouchkine, Konduri Aditya, Jacqueline H. Chen, and Kwan-Liu Ma
Published in IEEE Symposium on Large Data Analysis and Visualization (LDAV), 2020
Qi Wu, Michael J. Doyle, and Kwan-Liu Ma
Published in The Eurographics Symposium on Parallel Graphics and Visualization (EGPGV), 2022
Qi Wu, Joseph A. Insley, Victor A. Mateevitsi, Silvio Rizzi, and Kwan-Liu Ma
Published in IEEE Symposium on Large Data Analysis and Visualization (LDAV) Poster, 2022
David Bauer, Qi Wu, and Kwan-Liu Ma
Published in IEEE Visualization Conference (VIS), 2022
Stefan Zellmann, Qi Wu, Kwan-Liu Ma, and Ingo Wald
Published in The Eurographics Conference on Visualization (EuroVis), 2023
David Bauer, Qi Wu, and Kwan-Liu Ma
Published in IEEE Visualization Conference (VIS), 2023
Qi Wu, David Bauer, Yuyang Chen, and Kwan-Liu Ma
Published in arXiv, 2023
Qi Wu, David Bauer, Michael J. Doyle, and Kwan-Liu Ma
Published in IEEE Transactions on Visualization and Computer Graphics (TVCG), 2023
Stefan Zellmann, Qi Wu, Alper Sahistan, Kwan-Liu Ma, and Ingo Wald
Published in The Eurographics Conference on Visualization (EuroVis), 2024
Qi Wu, Joseph Insley, Victor Mateevitsi, Silvio Rizzi, Michael Papka, and Kwan-Liu Ma
Published in IEEE Transactions on Visualization and Computer Graphics (TVCG), 2024
Ingo Wald, Stefan Zellmann, Jefferson Amstutz, Qi Wu, Kevin Griffin, and Milan Jaros
Published in IEEE Symposium on Large Data Analysis and Visualization (LDAV), 2024
We propose and discuss a paradigm for expressing data-parallel rendering with the classically non-parallel ANARI API. We put this forward as a new standard for data-parallel rendering, describe two different implementations of the paradigm, and use several sample integrations into existing applications to show how easily it can be adopted and what is gained by doing so.
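To make the paradigm concrete, here is a minimal C++ sketch of what the application side might look like: every MPI rank makes the same sequence of standard ANARI calls but attaches only its local portion of the data, and the device library, not the application, composites across ranks. The library name "dp_device" is a placeholder for a data-parallel-capable ANARI implementation (the paper describes two), and all scene-construction details are elided; this is an illustration under those assumptions, not the paper's actual code.

```cpp
#include <anari/anari.h>
#include <mpi.h>
#include <cstdint>

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  // "dp_device" is a placeholder name for a data-parallel ANARI library.
  ANARILibrary lib = anariLoadLibrary("dp_device", nullptr, nullptr);
  ANARIDevice dev = anariNewDevice(lib, "default");
  anariCommitParameters(dev, dev);

  // Each rank builds a world containing only its local sub-volume.
  ANARIWorld world = anariNewWorld(dev);
  // ... create this rank's local volume/geometry and attach it here ...
  anariCommitParameters(dev, world);

  ANARIRenderer renderer = anariNewRenderer(dev, "default");
  anariCommitParameters(dev, renderer);

  ANARICamera camera = anariNewCamera(dev, "perspective");
  // ... set the shared camera position/direction on every rank ...
  anariCommitParameters(dev, camera);

  ANARIFrame frame = anariNewFrame(dev);
  uint32_t size[2] = {1024, 768};
  ANARIDataType color = ANARI_UFIXED8_RGBA_SRGB;
  anariSetParameter(dev, frame, "size", ANARI_UINT32_VEC2, size);
  anariSetParameter(dev, frame, "channel.color", ANARI_DATA_TYPE, &color);
  anariSetParameter(dev, frame, "world", ANARI_WORLD, &world);
  anariSetParameter(dev, frame, "camera", ANARI_CAMERA, &camera);
  anariSetParameter(dev, frame, "renderer", ANARI_RENDERER, &renderer);
  anariCommitParameters(dev, frame);

  // All ranks render collectively; cross-rank compositing happens inside
  // the device, so the application code stays the classic ANARI loop.
  anariRenderFrame(dev, frame);
  anariFrameReady(dev, frame, ANARI_WAIT);

  if (rank == 0) { // only one rank reads back the final composited image
    uint32_t w, h;
    ANARIDataType type;
    const void *pixels = anariMapFrame(dev, frame, "channel.color", &w, &h, &type);
    // ... write pixels to disk ...
    anariUnmapFrame(dev, frame, "channel.color");
  }

  anariRelease(dev, frame);
  anariRelease(dev, camera);
  anariRelease(dev, renderer);
  anariRelease(dev, world);
  anariRelease(dev, dev);
  anariUnloadLibrary(lib);
  MPI_Finalize();
  return 0;
}
```

The point of the paradigm is visible in the structure: nothing above is data-parallel-specific except which data each rank attaches, which is why existing ANARI applications are easy to adapt.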
Qi Wu*, Janick Martinez Esturo*, Ashkan Mirzaei, Nicolas Moenne-Loccoz, and Zan Gojcic (* Equal Contribution)
Published in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025
Daniel Zavorotny, Qi Wu, David Bauer, and Kwan-Liu Ma
Published in The Eurographics Symposium on Parallel Graphics and Visualization (EGPGV), 2025
Machine learning has enabled the use of implicit neural representations (INRs) to efficiently compress and reconstruct massive scientific datasets. However, despite advances in fast INR rendering algorithms, INR-based rendering remains computationally expensive, as computing data values from an INR is significantly slower than reading them from GPU memory. This bottleneck currently restricts interactive INR visualization to professional workstations. To address this challenge, we introduce an INR rendering framework accelerated by a scalable, multi-resolution GPU cache capable of efficiently representing tera-scale datasets. By minimizing redundant data queries and prioritizing novel volume regions, our method reduces the number of INR computations per frame, achieving an average 5x speedup over the state-of-the-art INR rendering method while still maintaining high visualization quality. Coupled with existing hardware-accelerated INR compressors, our framework enables scientists to generate and compress massive datasets in situ on high-performance computing platforms and then interactively explore them on consumer-grade hardware post hoc.
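The caching policy described above can be sketched in a few lines of C++: samples first probe a multi-resolution brick cache, misses fall back to coarser cached levels so the frame still completes, and a per-frame inference budget is spent on the most-requested missing bricks. All names here (BrickKey, inferBrickFromINR, the brick size) are hypothetical stand-ins for illustration, not the paper's implementation, which runs on the GPU.

```cpp
#include <algorithm>
#include <cstddef>
#include <unordered_map>
#include <utility>
#include <vector>

// One cache entry: a brick of decoded voxels at some resolution level.
struct BrickKey {
  int level, x, y, z;
  bool operator==(const BrickKey &o) const {
    return level == o.level && x == o.x && y == o.y && z == o.z;
  }
};
struct BrickKeyHash {
  size_t operator()(const BrickKey &k) const {
    size_t h = size_t(k.level);
    h = h * 1000003u + size_t(k.x);
    h = h * 1000003u + size_t(k.y);
    h = h * 1000003u + size_t(k.z);
    return h;
  }
};
struct Brick { std::vector<float> voxels; };

// Stand-in for one batched INR inference (hypothetical placeholder).
static Brick inferBrickFromINR(const BrickKey &) {
  return Brick{std::vector<float>(8 * 8 * 8, 0.f)};
}

class MultiResCache {
  std::unordered_map<BrickKey, Brick, BrickKeyHash> bricks_;
  std::unordered_map<BrickKey, int, BrickKeyHash> requests_;

public:
  // Per-sample path: prefer the requested level, fall back to coarser
  // cached levels, and log misses so frequently requested ("novel")
  // regions are decoded first.
  const Brick *lookup(const BrickKey &want) {
    BrickKey k = want;
    while (true) {
      auto it = bricks_.find(k);
      if (it != bricks_.end()) {
        if (!(k == want)) ++requests_[want]; // served coarser: still wanted
        return &it->second;
      }
      if (k.level == 0) break;
      --k.level; k.x >>= 1; k.y >>= 1; k.z >>= 1;
    }
    ++requests_[want];
    return nullptr; // caller substitutes a background value this frame
  }

  // Between frames: spend a fixed inference budget on the most-requested
  // missing bricks, bounding the INR computations done per frame.
  void update(size_t budget) {
    std::vector<std::pair<BrickKey, int>> reqs(requests_.begin(), requests_.end());
    std::sort(reqs.begin(), reqs.end(),
              [](const auto &a, const auto &b) { return a.second > b.second; });
    for (size_t i = 0; i < reqs.size() && i < budget; ++i)
      bricks_.emplace(reqs[i].first, inferBrickFromINR(reqs[i].first));
    requests_.clear();
  }
};

int main() {
  MultiResCache cache;
  cache.lookup({3, 5, 2, 7});                 // miss during frame 1 ...
  cache.update(64);                           // ... decoded between frames
  return cache.lookup({3, 5, 2, 7}) ? 0 : 1;  // hit in frame 2
}
```

The budget in update() is what turns INR inference from a per-sample cost into a bounded per-frame cost, which is the source of the reported speedup.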
Hamid Gadirov, Qi Wu, David Bauer, Kwan-Liu Ma, Jos B.T.M. Roerdink, and Steffen Frey
Published in The Eurographics Conference on Visualization (EuroVis), 2025
We present HyperFLINT (Hypernetwork-based FLow estimation and temporal INTerpolation), a novel deep learning-based approach for estimating flow fields, temporally interpolating scalar fields, and facilitating parameter space exploration in spatio-temporal scientific ensemble data. This work addresses the critical need to explicitly incorporate ensemble parameters into the learning process, as traditional methods often neglect these, limiting their ability to adapt to diverse simulation settings and provide meaningful insights into the data dynamics. HyperFLINT introduces a hypernetwork to account for simulation parameters, enabling it to generate accurate interpolations and flow fields for each timestep by dynamically adapting to varying conditions, thereby outperforming existing parameter-agnostic approaches. The architecture features modular neural blocks with convolutional and deconvolutional layers, supported by a hypernetwork that generates weights for the main network, allowing the model to better capture intricate simulation dynamics. A series of experiments demonstrates HyperFLINT’s significantly improved performance in flow field estimation and temporal interpolation, as well as its potential in enabling parameter space exploration, offering valuable insights into complex scientific ensembles.
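The hypernetwork mechanism at the core of this design can be shown in isolation: a small network maps the ensemble parameters to the weights of the main network, so the interpolator's behavior adapts to each parameter setting. The C++ sketch below substitutes tiny dense layers for the paper's convolutional and deconvolutional blocks, and pseudo-random values for trained weights; every size and name is a made-up illustration under those simplifications.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

using Vec = std::vector<float>;

// y = W x + b, weights read row-major from `w` starting at `off`;
// `relu` selects the hidden-layer nonlinearity.
static Vec dense(const Vec &w, size_t &off, const Vec &x, int out, bool relu) {
  Vec y(out, 0.f);
  for (int i = 0; i < out; ++i) {
    float s = 0.f;
    for (float xj : x) s += w[off++] * xj;
    s += w[off++]; // bias
    y[i] = relu ? std::max(0.f, s) : s;
  }
  return y;
}

int main() {
  const int P = 3;  // number of ensemble/simulation parameters
  const int D = 8;  // main-network input size (local field samples + time)
  const int H = 16; // main-network hidden width
  // The main network (the interpolator) needs (D+1)*H + (H+1) weights.
  const int nMain = (D + 1) * H + (H + 1);

  // Hypernetwork: maps the P parameters to all nMain main-network
  // weights. One linear layer stands in for the learned hypernetwork,
  // and the pseudo-random values stand in for trained weights.
  Vec hyperW((P + 1) * nMain);
  for (size_t i = 0; i < hyperW.size(); ++i)
    hyperW[i] = float(int((i * 2654435761u) % 201) - 100) / 1000.f;

  Vec theta = {0.5f, 1.2f, 0.3f}; // one ensemble member's parameters
  size_t off = 0;
  Vec mainW = dense(hyperW, off, theta, nMain, /*relu=*/false);

  // Evaluate the main network with the generated weights: a different
  // theta yields a differently-behaving interpolator, which is how the
  // model adapts across the parameter space.
  Vec x(D, 0.25f); // stand-in for field samples around the query point
  off = 0;
  Vec h = dense(mainW, off, x, H, /*relu=*/true);
  Vec y = dense(mainW, off, h, 1, /*relu=*/false);
  std::printf("interpolated value: %f\n", y[0]);
  return 0;
}
```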
David Bauer, Qi Wu, Hamid Gadirov, and Kwan-Liu Ma
Published in IEEE Visualization Conference (VIS), 2025
Arisa Cowe, Tyson Neuroth, Qi Wu, Martin Rieth, Jacqueline Chen, Myoungkyu Lee, and Kwan-Liu Ma
Published in arXiv, 2025
Haithem Turki*, Qi Wu*, Xin Kang, Janick Martinez Esturo, Shengyu Huang, Ruilong Li, Zan Gojcic, and Riccardo de Lutio (* Equal Contribution)
Published in arXiv, 2025