DREAMING CHALLENGE
DIMINISHED REALITY FOR EMERGING APPLICATIONS IN MEDICINE THROUGH INPAINTING!

MOTIVATION

While Augmented Reality (AR) is extensively studied in medicine, it represents just one possibility for modifying the real environment. Other forms of Mediated Reality (MR) remain largely unexplored in the medical domain. Diminished Reality (DR) is such a modality. DR refers to the removal of real objects from the environment by virtually replacing them with their background [1]. Combined with AR, powerful MR environments can be created. Although of interest within the broader computer vision and graphics community, DR is not yet widely adopted in medicine [2]. However, DR holds huge potential in medical applications. For example, where constraints on space and intra-operative visibility exist, and the surgeons’ view of the patient is further obstructed by disruptive medical instruments or personnel [3], DR methods can provide the surgeon with an unobstructed view of the operation site. Recently, advances in deep learning have paved the way for real-time DR applications, offering impressive imaging quality without the need for prior knowledge about the current scene [4]. In particular, deep inpainting methods stand out as the most promising direction for DR [5, 6, 7].

We invite students, enthusiasts, and companies to join our innovative challenge. Feel free to contact us today if you have any questions.

CHALLENGE TASK

The DREAMING challenge focuses on implementing inpainting-based DR methods in oral and maxillofacial surgery. Algorithms shall fill regions of interest concealed by disruptive objects with a plausible background, such as the patient’s face and its surroundings. The facial region is particularly interesting for medical DR due to its complex anatomy and its variety across age, gender, and ethnicity. Therefore, we will provide a dataset consisting of synthetic, but photorealistic, surgery scenes focusing on patient faces, with obstructions from medical instruments and the hands holding them. These scenes are generated by rendering highly realistic humans together with 3D-scanned medical instruments in a simulated operating room (OR) setting.
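For intuition, the sketch below illustrates the kind of frame-plus-mask interface an inpainting-based DR method operates on: given an RGB frame of the scene and a binary mask marking the disruptive object, it returns a frame in which the masked region is filled with plausible background. OpenCV's classical inpainting stands in for a learned model here, and the file names and mask convention are illustrative assumptions, not the challenge's official I/O specification.

# Minimal sketch of an inpainting-based DR step: an RGB frame plus a
# binary mask of the disruptive object go in, a frame with the masked
# region filled by plausible background comes out. cv2.inpaint is a
# classical stand-in for a deep inpainting network [5, 6, 7];
# "frame.png" and "mask.png" are placeholder file names (assumptions).
import cv2
import numpy as np


def diminish(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Fill the masked region of `frame` with plausible background.

    frame: HxWx3 uint8 BGR image of the surgical scene.
    mask:  HxW   uint8 image, non-zero where the disruptive object is.
    """
    mask = (mask > 0).astype(np.uint8) * 255
    # A learned inpainting model would replace this call.
    return cv2.inpaint(frame, mask, 5, cv2.INPAINT_TELEA)


if __name__ == "__main__":
    frame = cv2.imread("frame.png")                       # scene with obstruction
    mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)   # obstruction mask
    cv2.imwrite("output.png", diminish(frame, mask))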
HOW TO PARTICIPATE

Challenge registration, submissions, and evaluation will be organized via the grand-challenge.org platform. Challenge participants will be able to submit their algorithms as Docker containers. More information will follow soon!

The DREAMING challenge will be held in conjunction with the ISBI 2024 conference on the 27th of May in Athens. Participants who submit a valid paper describing their algorithm will have the opportunity to present their work as a poster or oral at the conference, and their papers will be part of the ISBI proceedings.

Challenge Summary Paper! We plan to publish a challenge summary paper after the challenge in a top-tier journal. Challenge participants will be invited as co-authors. The best paper and the best submissions will receive awards!

Questions? Contact us or e-mail us directly!

CHALLENGE TIMELINE

8th January 2024: Initiation of the challenge. First subset of training & validation data available.
22nd January 2024: Second subset of training & validation data available.
29th January 2024: Challenge platform available on grand-challenge.org. Full training & validation data available. Method evaluation opens.
6th April 2024: Paper submission deadline.
20th April 2024: Notifications of paper reviews.
27th April 2024: Camera-ready paper submission deadline; method evaluation closes.
6th May 2024: Final notifications (poster vs. oral).
27th May 2024: Conference – Announcement of winners.

All deadlines are 23:59 Pacific Time!

DATASET

The most recent DREAMING dataset can be found on Zenodo: dataset link

ORGANIZING TEAM

Dr. Christina Gsaxner, Dr. Shohei Mori, Gijs Luijten, Viet Duc Vu, Timo van Meegdenburg, Prof. Gabriele A. Krombach, Prof. Jens Kleesiek, Dr. Ulrich Eck, Prof. Nassir Navab, Yan Guo, Prof. Xiaojun Chen, Prof. Frank Hölzle, Dr. Behrus Puladi, Prof. Jan Egger

Sponsor(s)

CONTACT

gsaxner@tugraz.at

REFERENCES

1. Mori, S., Ikeda, S., & Saito, H. (2017). A survey of diminished reality: Techniques for visually concealing, eliminating, and seeing through real objects. IPSJ Transactions on Computer Vision and Applications, 9(1), 1-14.
2. Ienaga, N., Bork, F., Meerits, S., Mori, S., Fallavollita, P., Navab, N., & Saito, H. (2016). First deployment of diminished reality for anatomy education. In ISMAR-Adjunct (pp. 294-296). IEEE.
3. Egger, J., & Chen, X. (Eds.). (2021). Computer-Aided Oral and Maxillofacial Surgery: Developments, Applications, and Future Perspectives. Academic Press.
4. Gsaxner, C., Mori, S., Schmalstieg, D., Egger, J., Paar, G., Bailer, W., & Kalkofen, D. (2023). DeepDR: Deep Structure-Aware RGB-D Inpainting for Diminished Reality. arXiv preprint arXiv:2312.00532.
5. Pathak, D., Krahenbuhl, P., Donahue, J., Darrell, T., & Efros, A. A. (2016). Context encoders: Feature learning by inpainting. In CVPR (pp. 2536-2544).
6. Yu, J., Lin, Z., Yang, J., Shen, X., Lu, X., & Huang, T. S. (2019). Free-form image inpainting with gated convolution. In ICCV (pp. 4471-4480).
7. Kim, D., Woo, S., Lee, J. Y., & Kweon, I. S. (2019). Deep video inpainting. In CVPR (pp. 5792-5801).

© 2024 Institute for Artificial Intelligence in Medicine. This work is licensed under CC BY-NC-ND 4.0.