About
Advanced Artificial Intelligence (AI) techniques based on deep representations, such as GPT and Stable Diffusion, have demonstrated exceptional capabilities in analyzing vast amounts of data and generating coherent responses from unstructured data. They achieve this through sophisticated architectures that capture subtle relationships and dependencies. However, these models predominantly identify statistical dependencies rather than establishing and exploiting causal relationships. This can lead to spurious correlations and algorithmic bias, limiting the models' interpretability and trustworthiness. In contrast, traditional causal discovery methods aim to identify causal relationships within observed data in an unsupervised manner. While these methods show promising results in scenarios with fully observed data, they struggle in complex real-world settings, such as images, videos, and possibly text, where causal effects occur in latent spaces.
Recently, causal representation learning (CRL) has made significant progress in addressing the aforementioned challenges, demonstrating great potential for understanding the causal relationships underlying observed data. These techniques are expected to enable researchers to identify latent causal variables and discern the relationships among them, providing an efficient way to disentangle representations and enhance the reliability and interpretability of models. The goal of this workshop is to explore the challenges and opportunities in this field, discuss recent progress, identify open questions, and provide a platform to inspire cross-disciplinary collaborations. This workshop will cover both theoretical and applied aspects of CRL, including, but not limited to, the following topics:
- Theory of causal representation learning
- Causal representation learning models
- Causal discovery with latent variables
- Causal generative models
- Causal foundation models
- Applications of causal representation learning, such as in biology, economics, image/video analysis, and LLMs
- Benchmarking causal representation learning
Invited Speakers (ordered by last name)
Panelists (ordered by last name)
Location
East Exhibition Hall C
Schedule
- 8:30-8:45 AM: Welcome remarks
- 8:45-9:15 AM: Invited Talk 1 by Arthur Gretton (Title: Learning to Act in Noisy Contexts Using Deep Proxy Learning)
- 9:15-9:45 AM: Invited Talk 2 by Bernhard Schölkopf (Title: What Is a Causal Representation?)
- 9:45-10:00 AM: Coffee Break
- 10:00-10:30 AM: Invited Talk 3 by Cheng Zhang (Title: Towards Causal Foundation Model)
- 10:30-10:45 AM: Contributed Talk 1 (Title: On Domain Generalization Datasets as Proxy Benchmarks for Causal Representation Learning)
- 10:45-11:00 AM: Contributed Talk 2 (Title: Uncovering Latent Causal Structures from Spatiotemporal Data)
- 11:00-12:00 PM: Poster Session
- 12:00-2:00 PM: Lunch Break
- 2:00-2:30 PM: Invited Talk 4 by Yan Liu (Title: Frontiers of Counterfactual Outcome Estimation in Time Series)
- 2:30-3:00 PM: Invited Talk 5 by Ilya Shpitser (Title: Missing Data with ? and 0 Missingness Tokens: Identification and Estimation)
- 3:00-3:30 PM: Coffee Break
- 3:30-3:45 PM: Contributed Talk 3 (Title: A Shadow Variable Approach to Causal Decision Making with One-sided Feedback)
- 3:45-4:00 PM: Contributed Talk 4 (Title: Uncertainty-Aware Optimal Treatment Selection for Clinical Time Series)
- 4:00-4:50 PM: Panel Discussion
- 4:50-5:00 PM: Closing Remarks
Important Dates (Anywhere on Earth, TBD)
- Workshop Papers Submission: October 2, 2024 (extended from September 10, 2024; we welcome your ICLR submissions!)
- Acceptance Notification: October 9, 2024
- Camera-ready Deadline and Copyright Form: October 23, 2024
- Workshop Date: December 15, 2024
Organizers:
For any questions, please contact thecrlcommunity@gmail.com.