What Happens to a Dataset Transformed by a Projection-based Concept Removal Method?
Paper in proceedings, 2024

We investigate the behavior of methods using linear projections to remove information about a concept from a language representation, and we consider the question of what happens to a dataset transformed by such a method. A theoretical analysis and experiments on real-world and synthetic data show that these methods inject strong statistical dependencies into the transformed datasets. After applying such a method, the representation space is highly structured: in the transformed space, an instance tends to be located near instances of the opposite label. As a consequence, the original labeling can in some cases be reconstructed by applying an anti-clustering method.
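To make the abstract's claim concrete, the following is a minimal sketch, not the paper's exact experimental setup: an INLP-style iterative nullspace projection is applied to synthetic Gaussian data, after which we check how often an instance's nearest neighbour in the transformed space carries the opposite label, and then attempt to reconstruct the labeling with a simple anti-clustering heuristic (a greedy max-cut local search on the k-NN graph, standing in for whatever anti-clustering method the paper uses). All sizes, thresholds, and the choice of linear probe are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 100, 100
y = rng.integers(0, 2, size=n)                      # binary concept label
u = rng.normal(size=d)
u /= np.linalg.norm(u)                              # concept direction
X = rng.normal(size=(n, d)) + 2.0 * y[:, None] * u  # class-dependent shift

# INLP-style removal: repeatedly fit a linear probe for the concept and
# project its weight direction out of the representation, stopping once
# the probe is close to chance accuracy.
Xt = X.copy()
for _ in range(d):
    clf = LogisticRegression(max_iter=1000).fit(Xt, y)
    if clf.score(Xt, y) <= 0.55:
        break
    w = clf.coef_[0] / np.linalg.norm(clf.coef_[0])
    Xt = Xt - np.outer(Xt @ w, w)                   # nullspace projection

# Nearest-neighbour check: how often does an instance's nearest
# neighbour in the transformed space carry the opposite label?
# The abstract's claim is that this tends to be well above 0.5.
dists = np.linalg.norm(Xt[:, None, :] - Xt[None, :, :], axis=-1)
np.fill_diagonal(dists, np.inf)
nn = dists.argmin(axis=1)
print("opposite-label nearest-neighbour rate:", (y[nn] != y).mean())

# Anti-clustering readout: greedy max-cut local search on the
# symmetrised k-NN graph, which tries to give neighbouring points
# opposite labels; compare the result with the true labeling.
k = 5
knn = np.argsort(dists, axis=1)[:, :k]              # k nearest neighbours
A = np.zeros((n, n), dtype=bool)
A[np.repeat(np.arange(n), k), knn.ravel()] = True
A |= A.T                                            # symmetrise the graph
guess = rng.integers(0, 2, size=n)
for _ in range(100):                                # local search sweeps
    changed = False
    for i in range(n):
        nbr = guess[A[i]]
        if (nbr == guess[i]).sum() > (nbr != guess[i]).sum():
            guess[i] = 1 - guess[i]                 # flipping raises the cut
            changed = True
    if not changed:
        break
acc = (guess == y).mean()
print("anti-clustering reconstruction accuracy:", max(acc, 1 - acc))
```

If the opposite-label rate departs clearly from 0.5, the transformed dataset carries the kind of injected dependency the abstract describes, and that departure is precisely what makes the unsupervised anti-clustering reconstruction possible at all.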

Keywords

natural language processing, representation learning, invariant representation learning, neural representation

Author

Richard Johansson

Chalmers, Computer Science and Engineering (Chalmers), Data Science

Proceedings - International Conference on Computational Linguistics, COLING

2951-2093 (ISSN)

17486-17492 (pages)
978-2-493814-10-4 (ISBN)

2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Torino, Italy, 20-25 May 2024

Subject Categories

Language Technology (Computational Linguistics)
