We are now 11 days into the 22 player limit experiment. It looks promising; however, the player data is too noisy to tell whether there will be a long-term effect. I am confident in continuing the experiment at this point because the data indicates that the effect has not been negative. We may simply have had an anomalously good week of games, or there could be something to the combination of smaller room sizes and the waiting list change. One surprise is that additional rooms are being seeded without the !split command.
In terms of player feedback, the positive seems to outweigh the negative. Some people say they much prefer 8v8 to 11v11 sized games, while others say they only want to play 16v16. I am going to side with whichever option seems best for long-term growth. I like how, so far, the team hosts have been running for a larger proportion of each day; this seems important for attracting players. That said, I would not rule out running 32 player games on particular days.
Here is some data.

The above is a timeline of the last month. You'll have to click on it to view a larger version. It shows the player counts in team battles that ran in the past month. The battles are stacked so that the total players in game can be read off the graph. The dotted line is the total spectators across all battles. For example, the peak on Monday the 14th corresponds to
B2219014 18 on Fields_Of_Isis,
B2219005 14 on Tabula-v6.1 and
B2219004 6 on Altair_Crossing_V4.

The numbers do not correspond perfectly because the screenshot shows the players in the room while the graph shows the players in game. There is another technicality in that the graph is grouped into 10-minute periods, which is necessary to avoid spikes from the ~5 minutes of downtime between games. The peak also captures
B2219013 20 on Tau12, which started after the Tabula game.
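
For anyone curious how the timeline is put together, here is a minimal sketch of the 10-minute bucketing. It reuses the battles above as example data, but the timestamps are made up for illustration; the real records come from the battle database.

```python
from datetime import datetime, timedelta

# Made-up game records: (battle id, map, player count, start, end).
# The real values come from the battle database; these are placeholders.
games = [
    ("B2219005", "Tabula-v6.1", 14,
     datetime(2023, 8, 14, 18, 50), datetime(2023, 8, 14, 19, 25)),
    ("B2219014", "Fields_Of_Isis", 18,
     datetime(2023, 8, 14, 19, 0), datetime(2023, 8, 14, 19, 40)),
    ("B2219013", "Tau12", 20,
     datetime(2023, 8, 14, 19, 30), datetime(2023, 8, 14, 20, 10)),
]

BUCKET = timedelta(minutes=10)

def bucket_of(t):
    """Round a timestamp down to the start of its 10-minute bucket."""
    return t.replace(minute=(t.minute // 10) * 10, second=0, microsecond=0)

# Record each game's player count in every 10-minute bucket it overlaps.
# Summing a bucket across battles gives the stacked "total players in game"
# value; it also shows how one peak can capture two back-to-back games.
timeline = {}
for battle_id, game_map, players, start, end in games:
    t = bucket_of(start)
    while t < end:
        timeline.setdefault(t, {})[battle_id] = players
        t += BUCKET

for t in sorted(timeline):
    print(t.strftime("%a %H:%M"), sum(timeline[t].values()), timeline[t])
```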

The top of the timeline graph has a summary for each day. The main number is player minutes, i.e. the total time spent in team games by players each day (a rough sketch of the calculation follows the list below). This is calculated directly from game durations, so the 10-minute grouping of the graph has no impact. The things I noted last time are holding steady:
- There seem to be more player minutes, on average, since the start of the experiment.
- There are more games overall.
- There are about as many games with at least 16 players.
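
As a rough illustration of the player-minutes number, the sketch below sums players times duration per day. The game records are placeholders; the real durations come straight from the game data.

```python
from collections import defaultdict
from datetime import datetime

# Made-up per-game records: (player count, start, end). Because the sum
# uses actual game durations, the 10-minute bucketing of the timeline
# graph has no effect on this number.
games = [
    (18, datetime(2023, 8, 14, 19, 0),  datetime(2023, 8, 14, 19, 40)),
    (14, datetime(2023, 8, 14, 18, 50), datetime(2023, 8, 14, 19, 25)),
    (20, datetime(2023, 8, 14, 19, 30), datetime(2023, 8, 14, 20, 10)),
]

player_minutes = defaultdict(float)
for players, start, end in games:
    duration_minutes = (end - start).total_seconds() / 60
    player_minutes[start.date()] += players * duration_minutes

for day, minutes in sorted(player_minutes.items()):
    print(day, round(minutes), "player minutes")
```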
Eyeballing the last month is a decent sanity check, but it is tricky to draw anything concrete from it. Really, it is difficult to say anything definitive only 11 days into the experiment. But that hasn't stopped me from making more graphs.

The graph above compares the last 11 days (in red) to every other day this year (in green). The box plots show the median and the 25th and 75th percentiles. All it really says is that the two weeks prior to the experiment were not abnormally bad. The results so far are an improvement, on average, on the first six months of the year.
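
For reference, the box plot summary boils down to quartiles over the daily player-minute totals. This is a minimal sketch with stand-in random data rather than the real daily totals.

```python
import random
import statistics

random.seed(0)

# Stand-in daily player-minute totals; the real numbers come from the
# per-day summaries on the timeline graph.
experiment_days = [random.gauss(6000, 800) for _ in range(11)]
other_days = [random.gauss(5000, 900) for _ in range(180)]

for label, days in (("last 11 days", experiment_days), ("rest of year", other_days)):
    q1, median, q3 = statistics.quantiles(days, n=4)
    print(f"{label:>12}: 25th={q1:.0f}  median={median:.0f}  75th={q3:.0f}")
```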

Here is an increasingly complicated way to validate the eyeballing of the timeline. This graph shows how much of each day was spent above a set of player count thresholds (8, 16, 22 and 32). It uses the 10-minute batching, so it is prone to a bit of weirdness, but that should average out. It doesn't say a whole lot, and mostly just supports the idea that the player distribution spread out across the day and possibly flattened a little, although at this level of disaggregation many of the data points are well within the old distribution.
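
Roughly, that metric can be computed from the 10-minute buckets like this; the bucket totals in the example are made up.

```python
from collections import defaultdict
from datetime import datetime

THRESHOLDS = (8, 16, 22, 32)
BUCKET_HOURS = 10 / 60  # each bucket covers 10 minutes

def hours_above_thresholds(bucket_totals):
    """bucket_totals maps a 10-minute bucket start to the total players in
    game during that bucket (the stacked value from the timeline graph).
    Returns, per day, the hours spent at or above each threshold."""
    hours = defaultdict(lambda: dict.fromkeys(THRESHOLDS, 0.0))
    for bucket_start, players in bucket_totals.items():
        for threshold in THRESHOLDS:
            if players >= threshold:
                hours[bucket_start.date()][threshold] += BUCKET_HOURS
    return hours

# Tiny made-up example: three buckets on one evening.
example = {
    datetime(2023, 8, 14, 19, 0): 38,
    datetime(2023, 8, 14, 19, 10): 32,
    datetime(2023, 8, 14, 19, 20): 20,
}
print(dict(hours_above_thresholds(example)))
```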

Finally, this graph is a companion to the one above. It shows the uptime of games of particular sizes during the day, rather than the total playerbase. To see what is going on, note that the red dots in the "size ≥ 32" categories are all sitting at zero. The graph supports conclusions such as "it wasn't harder to find 8v8s during the past 11 days". The baseline data is also quite interesting, as it quantifies what we would be giving up with a smaller room size limit. The uptime of games with at least 22 players ("huge" games) is only around 3-4 hours per day, over the week. It also says that 32 player games were only particularly common on Friday, Saturday and Sunday.
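
The companion metric differs only in looking at the largest single game in each bucket instead of the stacked total. A sketch, again with made-up example buckets:

```python
from collections import defaultdict
from datetime import datetime

SIZES = (8, 16, 22, 32)
BUCKET_HOURS = 10 / 60

def uptime_by_game_size(timeline):
    """timeline maps a 10-minute bucket start to {battle id: player count}
    for every game overlapping that bucket. A bucket counts toward
    "size >= N" only if a single game in it has at least N players, so
    this measures the size of individual games, not the total playerbase."""
    uptime = defaultdict(lambda: dict.fromkeys(SIZES, 0.0))
    for bucket_start, games in timeline.items():
        biggest = max(games.values(), default=0)
        for size in SIZES:
            if biggest >= size:
                uptime[bucket_start.date()][size] += BUCKET_HOURS
    return uptime

# Made-up example: two buckets, each with two concurrent games.
example = {
    datetime(2023, 8, 14, 19, 0): {"B2219014": 18, "B2219005": 14},
    datetime(2023, 8, 14, 19, 10): {"B2219013": 20, "B2219004": 6},
}
print(dict(uptime_by_game_size(example)))
```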

In terms of player numbers, the experiment is the last dot on the weekly players graph. This graph shows the daily number of players that played any online game, averaged over the week. It is too bumpy to say much, especially in the short term, which is why I am resorting to collecting team game frequency and size data. The hope is that having more games for more of the day is good and will lead to more players.
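
For completeness, the weekly averaging behind that graph is just a mean of the daily counts per week. This sketch assumes Monday-anchored weeks and uses made-up daily counts; the real numbers come from the lobby server.

```python
import statistics
from datetime import date, timedelta

# Made-up daily counts of players who played any online game, keyed by date.
daily_players = {date(2023, 7, 31) + timedelta(days=i): 400 + 15 * (i % 7)
                 for i in range(28)}

# One dot per week: average the daily counts over each Monday-anchored week.
weeks = {}
for day, players in daily_players.items():
    week_start = day - timedelta(days=day.weekday())
    weeks.setdefault(week_start, []).append(players)

for week_start in sorted(weeks):
    print(week_start, round(statistics.mean(weeks[week_start])))
```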