Based on past discussions in the forum, significant work has gone into trying and testing several options for balancing and rating algorithms. Everybody has ideas, but trying them out and checking whether they work (or don't) is very hard and time-consuming. I really do hope that someone (you!) has a great idea, but it has to be proven on existing data.

The way I see it, ideally we would have a data set on which ideas can be easily tested (or is there such a thing already?). You could of course scrape the replay list, but that is even more work.
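Just to make that concrete, below is roughly the kind of per-game record I imagine such a data set holding. This is purely an illustration (the field names are made up, not an existing export format); the point is that it should contain everything needed to re-run a balance or rating idea offline, without going back to the replays.

[code]
# Hypothetical shape of one entry in such a data set (field names are
# placeholders, not an existing format).
game = {
    "game_id": 123456,
    "start_time": "2023-05-01T20:15:00Z",
    "map": "SomeMap",
    "team_a": ["player1", "player2", "player3"],      # player ids
    "team_b": ["player4", "player5", "player6"],
    "ratings_at_start": {"player1": 1712.4, "player4": 1650.0},  # etc.
    "winner": "team_a",
}
[/code]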
As an anecdote, I think I remember as many games in which people say "balance is shit, no chance of winning" and then go on to win as games in which they say that and lose.
[quote]Do you know how to do this?[/quote]
One idea would be to use the first N games to predict the result of game N+1, and do that for all games. Then, for the games where the algorithm under test would have given a different balance, check whether the algorithm's prediction is correct or not.
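To make that a bit more concrete, here is a rough Python sketch of the kind of harness I have in mind. Everything in it is a placeholder: games is assumed to be a chronologically ordered list of per-game records like the one sketched above, and predict_current / predict_candidate stand for the existing rating algorithm and the one under test, each returning the probability that team A wins while only being shown earlier games. I am reading "gave different balance" as "the two systems disagree on which team is the favourite".

[code]
def compare(games, predict_current, predict_candidate, min_history=100):
    """Walk forward through the games: use games [0..n-1] to predict game n.

    Only games where the two models disagree on the favourite are scored,
    because games where they agree cannot tell the two apart.
    Returns (number of disagreements, correct calls by the current model,
    correct calls by the candidate model).
    """
    disagreements = current_correct = candidate_correct = 0
    for n in range(min_history, len(games)):
        history, game = games[:n], games[n]       # strictly past games only
        a = predict_current(history, game["team_a"], game["team_b"]) >= 0.5
        b = predict_candidate(history, game["team_a"], game["team_b"]) >= 0.5
        if a == b:
            continue                              # same favourite: skip
        disagreements += 1
        team_a_won = game["winner"] == "team_a"
        current_correct += (a == team_a_won)
        candidate_correct += (b == team_a_won)
    return disagreements, current_correct, candidate_correct
[/code]

Since the models are only scored where they disagree, exactly one of them is right on each counted game, so whichever ends up with more correct calls is the better predictor on precisely the games where the choice of algorithm would have mattered.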