About
While advanced AI techniques based on deep representation learning, including GPT and LLaMA, excel at data analysis, they often identify correlations rather than causation. This can introduce spurious correlations and algorithmic bias, limiting the models’ interpretability and trustworthiness. Recently, causal representations have shown great potential for understanding the data-generation process and for probing the “intelligence” of deep learning by investigating underlying causal relationships. Incorporating causal representation learning (CRL) enables researchers and practitioners to better understand how features contribute to outcomes and to identify potential sources of bias or confounding that could affect decision-making and generalization. To help the community understand the challenges and opportunities, discuss recent progress, and identify outstanding open questions, we organize the CRL workshop series.
Follow Us
Please follow us on X for the latest news, or join us on Slack for active discussions.