Alexandre KABIL
12 décembre 2021
EGFR Webinars


This year, the French chapter of Eurographics (EGFR) is introducing a new webinar format. We propose to broadcast the prestigious invited seminars hosted by our labs so that the widest possible audience can benefit from them. Remote viewers will have the opportunity to ask questions via the chat, which will be read to the speaker.

The videos will then be made available on the EGFR YouTube channel: https://www.youtube.com/channel/UCSg1Hbt5E9yaHD8JS8eDBLA

If you are hosting prestigious invited speakers and would like all the laboratories to benefit, do not hesitate to write to us (Georges Drettakis, Julie Digne, Nicolas Mellado).

We begin this series with two talks at LIRIS (talk details and connection links are at the end of this message):

- Tuesday, December 14 at 10:00: James Tompkin (Brown University)

- Wednesday, December 15 at 10:00: Georges Drettakis (INRIA Sophia Antipolis)

Best regards,

Georges Drettakis, Julie Digne, Nicolas Mellado

PS: There may be a few technical glitches during these first seminars; thank you for your understanding!

==========================================

Tuesday, December 14

Title: Scene Reconstruction across the Differentiable Rendering Spectrum

Abstract:

Scene reconstruction enables applications across visual computing, including media creation and editing, and capturing the real world for virtual tourism and telecommunication. Advances in differentiable rendering for optimization- and learning-based reconstruction have increased quality but, as in 'forward' rendering, different methods have varying capabilities and computational costs that must be traded off against application needs. I will discuss our recent view reconstruction projects across the differentiable rendering spectrum, covering work on 6DoF video via image-based rendering for VR (visual.cs.brown.edu/matryodshka), depth reconstruction from sparse 3D points using differentiable splatting and diffusion (visual.cs.brown.edu/diffdiffdepth), and integrating time-of-flight imaging for monocular dynamic scene reconstruction (imaging.cs.cmu.edu/torf/). Finally, I will discuss how these trade-offs might inform how we can make differentiable rendering practical.
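For readers unfamiliar with the core idea behind the talk, here is a minimal, hypothetical sketch (not taken from the speaker's work) of optimization-based reconstruction with a differentiable renderer: because the forward rendering model is differentiable, scene parameters can be recovered from an observed image by gradient descent on a photometric loss. The toy "renderer" below simply multiplies a per-pixel albedo by a known light intensity.

```python
# Hypothetical toy example of inverse rendering via a differentiable
# forward model: recover per-pixel albedo from an observed image by
# gradient descent on a squared photometric loss.

def render(albedo, light):
    # Trivial differentiable forward model: pixel = albedo * light.
    return [a * light for a in albedo]

def reconstruct(target, light, steps=500, lr=0.1):
    albedo = [0.0] * len(target)  # initial guess for the scene parameters
    for _ in range(steps):
        image = render(albedo, light)
        # Analytic gradient of sum((image - target)^2) w.r.t. each albedo.
        grads = [2.0 * (img - tgt) * light for img, tgt in zip(image, target)]
        albedo = [a - lr * g for a, g in zip(albedo, grads)]
    return albedo

observed = render([0.2, 0.5, 0.9], light=2.0)  # synthetic "photograph"
estimate = reconstruct(observed, light=2.0)
print([round(a, 3) for a in estimate])         # ≈ [0.2, 0.5, 0.9]
```

Real differentiable renderers replace this one-line forward model with full light transport (rasterization, splatting, or volume rendering) and rely on automatic differentiation, but the optimization loop has the same shape.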

Bio:

James Tompkin (www.jamestompkin.com) is an assistant professor of Computer Science at Brown University. His research at the intersection of computer vision, computer graphics, and human-computer interaction helps develop new visual computing tools and experiences. His doctoral work at University College London on large-scale video processing and exploration techniques led to creative exhibition work in the Museum of the Moving Image in New York City. Postdoctoral work at Max-Planck-Institute for Informatics and Harvard University helped create new methods to edit content within images and videos. Recent research has developed new multi-camera reconstruction techniques for light field, 360, and time-of-flight imagery, and has developed new image editing and generation methods through learning explicit structured representations.

When: Dec 14, 2021 10:00 Paris

Topic: EGFR Webinar 1 - LIRIS - James Tompkin

Please click the link below to join the webinar:

https://cnrs.zoom.us/j/96885950961?pwd=cGNjRHd5djNGSjd4QllTMzhEeC9FUT09

Passcode: 3ef9V1

=================================

Wednesday, December 15

Title: Bringing Together Learning and Graphics: Rendering, 3D Representations and Synthetic Training Data

Abstract:

Deep learning methods such as image-to-image translation and, more recently, multi-layer perceptrons coupled with volume rendering have been used to develop Neural Rendering solutions that synthesize or render new images with stunning visual quality. The former methods are often "end-to-end", operating entirely on 2D photos, and both are often trained exclusively on 2D image data. In this talk we discuss an alternative approach that exploits the huge body of knowledge developed in traditional physically- and image-based Computer Graphics rendering and uses it hand-in-hand with such learning methods to effectively solve both inverse and forward problems in graphics. In particular, we discuss the importance of three key elements in these solutions: the use of explicit 3D data extracted from multi-view stereo, the use of rendering, and rendered synthetic data for effective and efficient supervised learning. We illustrate these ideas on three recent projects: point-based neural rendering using learned features, free-viewpoint rendering of faces using generative adversarial networks, and neural relighting and rendering of indoor scenes, and discuss some issues related to neural representations for rendering.
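The "multi-layer perceptrons coupled with volume rendering" mentioned above refers to NeRF-style methods, where an MLP predicts a density and a color at sample points along each camera ray, and the samples are composited with the standard volume-rendering quadrature. The sketch below (a hypothetical illustration, not code from the talk) shows that compositing step with fixed samples and a single color channel.

```python
import math

# Hypothetical sketch of the volume-rendering quadrature used by
# MLP-based neural renderers: alpha-composite (density, color) samples
# along one camera ray using transmittance weights.

def composite(densities, colors, deltas):
    """Composite samples along one ray.

    densities : volume density sigma_i at each sample point
    colors    : scalar radiance c_i at each sample (one channel for brevity)
    deltas    : spacing between consecutive samples
    """
    transmittance = 1.0  # probability the ray reaches the current sample
    out = 0.0
    for sigma, c, d in zip(densities, colors, deltas):
        alpha = 1.0 - math.exp(-sigma * d)  # opacity of this ray segment
        out += transmittance * alpha * c
        transmittance *= 1.0 - alpha        # attenuate for later samples
    return out

# A nearly transparent sample in front of an almost opaque bright one:
print(round(composite([0.1, 50.0], [0.2, 1.0], [0.5, 0.5]), 3))  # prints 0.961
```

In a full system, the densities and colors would come from an MLP evaluated at the sample positions, and because every operation here is differentiable, the whole pipeline can be trained from 2D photographs alone.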

Bio:

George Drettakis graduated in Computer Science from the University of Crete, Greece, and obtained an M.Sc. and a Ph.D. (1994) at the University of Toronto, with E. Fiume. He was an ERCIM postdoctoral fellow in Grenoble, Barcelona and Bonn (94-95). He obtained an Inria researcher position in Grenoble in 1995, and his "Habilitation" at the University of Grenoble (1999). In 2000 he founded the REVES research group at INRIA Sophia-Antipolis, and now heads the follow-up group GRAPHDECO and is an INRIA Senior Researcher (full professor equivalent). He received the Eurographics (EG) Outstanding Technical Contributions award in 2007, is an EG fellow, and received the prestigious ERC Advanced Grant in 2019. He was an associate editor for ACM Transactions on Graphics, technical papers chair of SIGGRAPH Asia 2010, co-chair of Eurographics 2002 & 2008, and associate editor and co-editor in chief of IEEE Transactions on Visualization and Computer Graphics. He has worked on many different topics in computer graphics, with an emphasis on rendering. He initially concentrated on lighting and shadow computation and subsequently worked on 3D audio, perceptually-driven algorithms, virtual reality and 3D interaction. He has worked on textures, weathering and perception for graphics and in recent years on image-based and neural rendering/relighting as well as deep material acquisition.

When: Dec 15, 2021 10:00 Paris

Topic: EGFR Webinar 2 - LIRIS - Georges Drettakis

Please click the link below to join the webinar:

https://cnrs.zoom.us/j/98163187805?pwd=MmM5OUdGUkRsUDJGMGVKY0FmWW9BQT09

Passcode: q1cFNU
