Despite the large and growing number of approaches to counterfactual search, there have been surprisingly few benchmark studies comparing different methodologies. This may be partially due to limited software availability in this space. Recent work has started to address this gap: firstly, @deoliveira2021framework run a large benchmarking study covering different algorithmic approaches and numerous tabular datasets; secondly, @pawelczyk2021carla introduce a Python framework---CARLA---that can be used to apply and benchmark different methodologies; finally, [`CounterfactualExplanations.jl`](https://github.com/pat-alt/CounterfactualExplanations.jl) [@altmeyer2022counterfactualexplanations] provides an extensible, fast and language-agnostic implementation in Julia. Since the experiments presented here involve extensive simulations, we have relied on and extended the Julia implementation due to the associated performance benefits. In particular, we have built a framework on top of [`CounterfactualExplanations.jl`](https://github.com/pat-alt/CounterfactualExplanations.jl) that extends its functionality from static benchmarks to simulation experiments: [`AlgorithmicRecourseDynamics.jl`](https://anonymous.4open.science/r/AlgorithmicRecourseDynamics/README.md)^[The code is available from the following anonymized GitHub repository: [https://anonymous.4open.science/r/AlgorithmicRecourseDynamics/README.md](https://anonymous.4open.science/r/AlgorithmicRecourseDynamics/README.md).]. The core concepts implemented in that package reflect what is presented in Section \@ref(method-2) of this paper.
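To make the notion of a simulation experiment concrete, the following is a minimal, self-contained Julia sketch of the kind of loop such a framework runs: in each round, a batch of individuals who are assigned the unfavourable outcome implements recourse, and the model is then refit on the shifted data. All names used here (`ToyModel`, `fit_model`, `generate_recourse`, `simulate_dynamics!`) are hypothetical placeholders for illustration only and do not correspond to the actual API of either package.

```julia
using Random

# Toy linear classifier: w'x + b > 0 => favourable outcome.
struct ToyModel
    w::Vector{Float64}
    b::Float64
end

predict(M::ToyModel, X::Matrix{Float64}) = vec(X * M.w) .+ M.b .> 0

# Placeholder model fit: least squares on labels recoded to {-1, +1}.
function fit_model(X::Matrix{Float64}, y::AbstractVector{Bool})
    t = 2.0 .* y .- 1.0
    θ = [X ones(size(X, 1))] \ t
    return ToyModel(θ[1:end-1], θ[end])
end

# Placeholder recourse generator: a single gradient step towards the boundary.
function generate_recourse(M::ToyModel, x::Vector{Float64}; γ = 0.5)
    return x .+ γ .* M.w ./ (sum(abs2, M.w) + eps())
end

# One simulation experiment: recourse is implemented, then the model is refit,
# so that each round's recourse is generated under an endogenously shifted model.
function simulate_dynamics!(X, y; n_rounds = 5, batch_size = 10, rng = Random.default_rng())
    M = fit_model(X, y)
    for _ in 1:n_rounds
        negatives = findall(.!predict(M, X))
        batch = negatives[randperm(rng, length(negatives))[1:min(batch_size, length(negatives))]]
        for i in batch
            X[i, :] = generate_recourse(M, X[i, :])   # individual moves to new position
            y[i] = true                               # and reports the target outcome
        end
        M = fit_model(X, y)                           # model shift induced by recourse
    end
    return M, X, y
end

# Example run: 200 individuals with 2 features.
rng = Random.MersenneTwister(42)
X = randn(rng, 200, 2)
y = vec(X[:, 1] .+ 0.5 .* X[:, 2] .+ 0.2 .* randn(rng, 200)) .> 0
M, Xs, ys = simulate_dynamics!(copy(X), copy(y); rng = rng)
```

The essential design choice, mirrored in the actual package, is that recourse generation and model (re)training alternate inside the loop rather than being evaluated once against a static model, which is what distinguishes these simulation experiments from conventional one-shot benchmarks.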