The bpcs package performs Bayesian estimation of paired comparison models using Stan, including variations of the Bradley-Terry model (Bradley and Terry 1952) and the Davidson model (Davidson 1970).

Package documentation and vignette articles can be found at:


From version >= 1.2.0, we rely upon the Stan software and the cmdstanr package (Gabry and Češnovar 2020).

You need a suitable toolchain and installations of cmdstan and cmdstanr. The easiest way is to follow the instructions in the cmdstanr package documentation. See

After installing and configuring cmdstanr properly, you can install the development version of the bpcs package directly from the GitHub repository.
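
For example, assuming the package is hosted at davidissamattos/bpcs on GitHub (an assumption; verify the repository path), the development version can be installed with the remotes package:

```r
# install.packages("remotes")  # if remotes is not already installed
remotes::install_github("davidissamattos/bpcs")
```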


After installing, we load the package with:

library(bpcs)

#> This is the version 1.2.1 of the bpcs package. 
#> The bpcs package requires an installation of cmdstan and an appropriated toolchain.

Minimal example

The main function of the package is the bpc function. For the simple Bradley-Terry model, this function requires a specific type of data frame that contains:

  • Two columns containing the name of the contestants in the paired comparison
  • Two columns containing the score of each player OR one column containing the result of the match (0 if player0 won, 1 if player1 won, 2 if it was a tie)
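
For illustration, a minimal data frame in the result-column format could look like this (the column names here mirror the tennis_agresti naming, but any names work since they are passed explicitly to bpc):

```r
# A minimal contest data frame in the result-column format:
# y = 0 means player0 won, y = 1 means player1 won, y = 2 means a tie.
d <- data.frame(
  player0 = c("A", "A", "B"),
  player1 = c("B", "C", "C"),
  y       = c(0, 1, 2)
)
str(d)
```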

We will use the tennis dataset from Agresti (2003). The dataset can be seen below and is available as data(tennis_agresti):

dplyr::sample_n(tennis_agresti, 10) %>% knitr::kable()
player0    player1        y   id
---------  ------------  --  ---
Seles      Sanchez        0   13
Sabatini   Sanchez        1   41
Graf       Sanchez        0   29
Graf       Sabatini       0   16
Graf       Sanchez        0   30
Seles      Navratilova    1   10
Graf       Sabatini       0   18
Graf       Sabatini       1   21
Graf       Sanchez        0   27
Seles      Graf           1    5

Based on the scores of each contestant, the bpc function automatically computes who won the contest. Alternatively, you can provide a vector of who won if that is already available (for more information, see ?bpc).
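
To make the encoding concrete, here is a small sketch (illustrative only, not the package's internal code) of how a result column can be derived from two hypothetical score columns:

```r
# Hypothetical score vectors for player0 and player1.
# The encoding matches the bpc convention:
# 0 if player0 won, 1 if player1 won, 2 if it was a tie.
score0 <- c(3, 1, 2)
score1 <- c(1, 2, 2)
y <- ifelse(score0 > score1, 0, ifelse(score1 > score0, 1, 2))
y
#> [1] 0 1 2
```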

For the simple Bradley-Terry model, we specify the model type as 'bt'. Here we hide the MCMC sampler chain messages to simplify the output.

m <- bpc(data = tennis_agresti, # data frame
         player0 = 'player0', # name of the column for player 0
         player1 = 'player1', # name of the column for player 1
         result_column = 'y', # name of the column for the result of the match
         model_type = 'bt', # bt = simple Bradley-Terry model
         solve_ties = 'none', # there are no ties in the dataset, so 'none' is appropriate
         show_chain_messages = FALSE)
#> Running MCMC with 4 parallel chains...
#> Chain 2 finished in 7.4 seconds.
#> Chain 1 finished in 7.6 seconds.
#> Chain 3 finished in 7.7 seconds.
#> Chain 4 finished in 7.7 seconds.
#> All 4 chains finished successfully.
#> Mean chain execution time: 7.6 seconds.
#> Total execution time: 8.3 seconds.

If cmdstanr is available and working correctly, this function samples the posterior distribution and creates a bpc object.

To see a summary of the results we can run the summary function. Here we get three tables:

  1. The parameters of the model
  2. The probabilities of one player beating the other (this probability is based on the predictive posterior distribution)
  3. The rank of the players based on their abilities (this rank is based on the posterior rank distribution).
#> Estimated baseline parameters with 95% HPD intervals:
#> Table: Parameters estimates
#> Parameter               Mean   Median   HPD_lower   HPD_higher
#> --------------------  ------  -------  ----------  -----------
#> lambda[Seles]           0.48     0.50       -2.43         3.12
#> lambda[Graf]            0.91     0.91       -2.01         3.46
#> lambda[Sabatini]       -0.38    -0.37       -3.12         2.45
#> lambda[Navratilova]     0.00     0.00       -2.84         2.71
#> lambda[Sanchez]        -1.14    -1.15       -3.99         1.59
#> * A higher lambda indicates a higher team ability
#> Posterior probabilities:
#> These probabilities are calculated from the predictive posterior distribution
#> for all player combinations
#> Table: Estimated posterior probabilites
#> i             j              i_beats_j   j_beats_i
#> ------------  ------------  ----------  ----------
#> Graf          Navratilova         0.68        0.32
#> Graf          Sabatini            0.81        0.19
#> Graf          Sanchez             0.83        0.17
#> Graf          Seles               0.62        0.38
#> Navratilova   Sabatini            0.53        0.47
#> Navratilova   Sanchez             0.70        0.30
#> Navratilova   Seles               0.43        0.57
#> Sabatini      Sanchez             0.70        0.30
#> Sabatini      Seles               0.29        0.71
#> Sanchez       Seles               0.24        0.76
#> Rank of the players' abilities:
#> The rank is based on the posterior rank distribution of the lambda parameter
#> Table: Estimated posterior ranks
#> Parameter      MedianRank   MeanRank   StdRank
#> ------------  -----------  ---------  --------
#> Graf                    1       1.35      0.59
#> Seles                   2       2.10      0.86
#> Navratilova             3       3.05      0.87
#> Sabatini                4       3.72      0.83
#> Sanchez                 5       4.78      0.50
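
For intuition, the Bradley-Terry win probability is the inverse logit of the ability difference. Plugging the posterior means from the parameter table into this formula gives a rough point estimate; the probabilities in the table above are computed from the full predictive posterior, so the numbers will not match this plug-in value exactly:

```r
# Bradley-Terry: P(i beats j) = exp(lambda_i) / (exp(lambda_i) + exp(lambda_j)),
# which equals plogis(lambda_i - lambda_j) (the inverse logit).
lambda_graf    <- 0.91   # posterior means taken from the parameter table above
lambda_sanchez <- -1.14
plogis(lambda_graf - lambda_sanchez)  # about 0.886, vs 0.83 from the predictive posterior
```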

Features of the bpcs package

  • Bayesian computation of different variations of the Bradley-Terry model (including versions with home advantage, random effects, and the generalized model).
  • Bayesian computation of different variations of the Davidson model to handle ties in the contest (including versions with home advantage, random effects, and the generalized model).
  • Accepts a column with the results of the contest or the scores for each player.
  • Customize a normal prior distribution for every parameter.
  • Compute HPD intervals for every parameter with the get_parameters_df function.
  • Compute rank of the players with the get_rank_of_players_df function.
  • Compute all the probability combinations for one player beating the other with the get_probabilities_df function.
  • Convert aggregated tables of results into long format (one contest per row) with the expand_aggregated_data function.
  • Obtain the posterior distribution for a single parameter of the model with the get_sample_posterior function or for all parameters with get_parameters_posterior.
  • Easy predictions using the predict function.
  • Although we have some publication-ready functions that use ggplot and kable, we do not enforce any particular table or plotting library. All results can be obtained as data frames for easier plotting and table creation; just use the functions that end with _df.
  • We emphasize the need to manually specify the model to be used.

Models available

  • Bradley-Terry (bt) (Bradley and Terry 1952)
  • Davidson model (davidson) for handling ties (Davidson 1970)

Options to add to the models:

  • Order effect (-ordereffect), e.g. for home advantage (Davidson and Beaver 1977)
  • Generalized models (-generalized), when we have contestant-specific (player) predictors (Springall 1973)
  • Subject predictors (-subjectpredictors), when we have subject-specific predictors (Böckenholt 2001)
  • Intercept random effects (-U), for example to compensate for clustering or repeated measures (Böckenholt 2001)

Some examples of valid model specifications:

  • Simple BT model: bt
  • Davidson model with random effects: davidson-U
  • Generalized BT model with order effect: bt-generalized-ordereffect


When combining options, note that:

  • The model type should come first
  • The order of the options does not matter: bt-U-ordereffect is equivalent to bt-ordereffect-U
  • The - separator is mandatory


This package provides a series of small, self-contained vignettes that exemplify the use of each model. In the vignettes, we also provide example code for data transformation, tables, and plots.

Below we list all our vignettes with a short description:

  • Getting Started: This vignette shows a basic example on tennis competition data, covering how to run a Bradley-Terry model, check MCMC diagnostics, obtain posterior predictive values, rank the players, and predict new matches.

  • Ties and home advantage: This vignette covers a soccer example from the Brazilian soccer league. Here, we first model the results using a Bradley-Terry model and the Davidson model to handle ties. Then, we extend both models to include order effects, which allows us to investigate the home advantage with and without the presence of ties.

  • Bradley-Terry with random effects: This vignette covers the problem of ranking black-box optimization algorithms based on benchmarks. Since in benchmarking we often run the same optimization algorithm more than once with the same benchmark problem, we need to compensate for the repeated measures effect. We deal with this utilizing a simple Bradley-Terry model with random effects.

  • Paper “Bayesian Paired-Comparison with the bpcs package” (Arxiv:2101.11227).
    • This paper describes the theory and related work behind the presented models, along with three reanalyses in the behavioral sciences, including random effects, order effects, and subject-specific predictors.
    • The reanalyses code can be found at:

Contributing and bugs

If you are interested, you are welcome to contribute to the repository through pull requests.

We have a short contributing guide vignette.

If you find bugs, please report them at

Icon credits

  • Boxing gloves image by “surang” from “”
  • Hex Sticker created with the hexSticker package


Agresti, Alan. 2003. Categorical Data Analysis. Vol. 482. John Wiley & Sons.

Böckenholt, Ulf. 2001. “Hierarchical Modeling of Paired Comparison Data.” Psychological Methods 6 (1): 49.

Bradley, Ralph Allan, and Milton E Terry. 1952. “Rank Analysis of Incomplete Block Designs: I. The Method of Paired Comparisons.” Biometrika 39 (3/4): 324–45.

Davidson, Roger R. 1970. “On Extending the Bradley-Terry Model to Accommodate Ties in Paired Comparison Experiments.” Journal of the American Statistical Association 65 (329): 317–28.

Davidson, Roger R, and Robert J Beaver. 1977. “On Extending the Bradley-Terry Model to Incorporate Within-Pair Order Effects.” Biometrics, 693–702.

Gabry, Jonah, and Rok Češnovar. 2020. Cmdstanr: R Interface to ’CmdStan’.

Springall, A. 1973. “Response Surface Fitting Using a Generalization of the Bradley-Terry Paired Comparison Model.” Journal of the Royal Statistical Society: Series C (Applied Statistics) 22 (1): 59–68.