Chaos Engineering is an approach to resilience benchmarking for distributed architectures. Existing approaches to experiment selection include random fault injection and lineage-driven fault injection; while quite exhaustive, they lack efficiency. The ORCAS project introduces a new approach to resilience benchmarking for microservice architectures by leveraging knowledge about the architecture, as well as by simulating experiments outside of the production environment. By prioritizing certain experiments instead of choosing them at random, the goal is to provide an efficient alternative to the existing approaches. As one of the main components of the ORCAS project, the decision engine is responsible for experiment selection, taking the architecture and previous experiment results as inputs. An outline for integrating the results into this framework is presented. This thesis explores and evaluates algorithms suitable for efficient experiment selection in the decision engine. The algorithms presented are based on Bayesian networks, reinforcement learning, artificial neural networks, and multilevel feedback queues. They are evaluated with respect to their efficiency and fault detection rate. For this, five versions of an example architecture were used; the five architectures are grouped into three equivalence groups based on their level of resilience, and one candidate from each group is then used for evaluation. The results show that reinforcement learning is a suitable approach to the problem of efficiently selecting fault injections, since it does not require a training phase. Moreover, almost all algorithms improve on the results of random fault injection selection.