I recently wrote an opinion piece on the topic of releasing fast, and it received a few comments arguing that my suggestion could actually be detrimental to a developer in terms of ratings. When that happened my curiosity was piqued, and I thought it would be interesting to see if there was any hard evidence, one way or the other, as to whether the length of time between releases has an effect on App Store ratings.
After getting the comments on my last post, I was willing to believe that releasing fast could be detrimental. The argument that you should take the time to make sure your app is bug-free is a logical one. So I was a bit surprised when the data started to unfold. The results of my data collection and analysis showed a clear link between releasing fast and improved App Store ratings.
Here's what I did: I picked 5 broad categories: Social Networking, Lifestyle, Tools, Games and Music. Within those categories I first picked various apps that I knew were in the top 10, and selected a total of 29 to start (here's the chart of all of the apps I analyzed).
With my list of apps in hand, I checked around for sites that could give me historical release dates as well as lifetime and current App Store ratings. After a quick search, App Annie appeared to give me everything I needed. The release dates were not always perfect and rarely went back to launch, but App Annie generally gave me more than 20 data points for each app, which I figured was enough to give a reliable track record.
Once I had the historical release dates, I copied them into Excel, Plotly and Datawrapper. Once in Excel and Datawrapper, I started to play with the data. Conveniently, Excel stores dates as numbers, where January 1, 1900 = 1 and February 22, 2014 = 41692. That way, all I had to do was subtract one release date from the next to get the time between releases. I did this for all of the apps and then averaged those numbers. I also associated the current App Store rating and the lifetime App Store rating with each app.
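The same gap-then-average calculation can be sketched outside of Excel. This is a minimal example with made-up release dates (the real data came from App Annie); Python's date subtraction plays the role of subtracting the Excel serial numbers.

```python
from datetime import date

# Hypothetical release-date history for one app (illustrative, not real data)
releases = [
    date(2013, 6, 1),
    date(2013, 7, 15),
    date(2013, 9, 2),
    date(2013, 10, 1),
]

# Subtract each release date from the next to get days between releases,
# mirroring the Excel serial-number subtraction described above
gaps = [(later - earlier).days for earlier, later in zip(releases, releases[1:])]

# Average those gaps to get one number per app
avg_gap = sum(gaps) / len(gaps)

print(gaps)     # [44, 49, 29]
print(avg_gap)  # roughly 40.7 days between releases for this sample app
```

Each app ends up summarized as a single "average days between releases" figure, which is what gets paired with its ratings later.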
This is about the point where I figured everything I was doing was wrong. I was concerned that because the apps were picked from a select group they were far from random, meaning the data was biased. So at that point I decided to randomize the selection of observations. I went to the App Store and randomly selected five more apps from each category. Luckily all of the apps that were randomly selected were still in active development, with releases within the last couple of months and generally within the last 30 days. I added in the 25 newly selected apps and started to look at the data.
The first thing I did was check how often the current App Store rating was better or worse than the lifetime App Store rating. I did this with a quick and dirty IF statement and decided to err on the negative side if there was a tie. At first blush, without doing anything else to the data, more apps had better current ratings than lifetime ratings than I expected. Even with ties going to the negative side, 26 out of 54 apps had better current ratings than lifetime ratings. Right away, this seemed to fly in the face of the argument that the more you release, the worse your App Store rating will be.
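That quick-and-dirty IF statement amounts to a strict "greater than" test, so ties fall on the negative side automatically. A sketch with hypothetical (current, lifetime) rating pairs:

```python
# Hypothetical (current_rating, lifetime_rating) pairs; not the real dataset
apps = [
    (4.5, 4.0),  # improved
    (3.5, 3.5),  # tie: counts as "not better", erring on the negative side
    (4.0, 4.5),  # worse
    (5.0, 4.5),  # improved
]

# Strict > means a tie is never counted as an improvement
better = sum(1 for current, lifetime in apps if current > lifetime)

print(better)  # 2 of the 4 sample apps have a better current rating
```

In the actual data, the same tally came out to 26 of 54 apps.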
Now, I'm at least somewhat of a fan of statistics (don't ask why), so the next thing I decided to do was some regression analysis. This is where the data really shone through. I ran two regressions, one with lifetime App Store rating as the dependent variable and one with current App Store rating as the dependent variable, with average days between releases as the independent variable in both cases. In both, there was clearly a correlation: the faster a team releases an app, the better its App Store rating tends to be. The Adjusted R-Squared was not massive in either scenario, approximately 0.03 for lifetime App Store rating and approximately 0.1 for current App Store rating. That's not a big Adjusted R-Squared, but it does tell us that the time between releases explains some of the variation in ratings. Far more important to me was that the t-statistic for both was approximately 30, meaning the relationship is statistically significant.
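For anyone who wants to reproduce this kind of analysis, here is a sketch of a single-predictor least-squares regression, computing the slope, adjusted R-Squared, and the slope's t-statistic by hand. The data points are invented to illustrate the method (with the slope direction chosen to mirror the finding); the real numbers above came from the 54-app dataset.

```python
import math

# Hypothetical data: average days between releases (x) vs. current rating (y)
x = [10, 14, 21, 30, 45, 60, 90]
y = [4.6, 4.5, 4.4, 4.2, 4.0, 3.9, 3.6]

n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n

# Sums of squares for ordinary least squares
sxx = sum((xi - mean_x) ** 2 for xi in x)
sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))

slope = sxy / sxx                  # negative: longer gaps, lower ratings
intercept = mean_y - slope * mean_x

# R-Squared and Adjusted R-Squared (one predictor, so n - 2 residual dof)
ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
ss_tot = sum((yi - mean_y) ** 2 for yi in y)
r2 = 1 - ss_res / ss_tot
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - 2)

# t-statistic for the slope: estimate divided by its standard error
se_slope = math.sqrt(ss_res / (n - 2) / sxx)
t_stat = slope / se_slope

print(slope, adj_r2, t_stat)
```

A negative slope with a large-magnitude t-statistic is the pattern described above: more days between releases predicts a lower rating, and the relationship is unlikely to be noise.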
These results tell me that while average days between releases doesn't explain a huge portion of an App Store rating, it has a big effect on the portion it does explain. What this suggests to me is that the claim that speedy releases hurt App Store ratings is most likely anecdotal. One thing that is clear is that we as people tend to remember the negative more than the positive. Most developers have probably, at one point or another, rushed out a release to get a new feature to users as soon as possible and mistakenly left a bug in the code that ended up upsetting users. This can obviously lead to bad ratings, and then developers as a whole need to find reasons for those bad ratings. I think this is where people tend to fall back on the argument that fast releases lead to bugs, which lead to bad ratings.
When I look at the data I see a positive correlation between releasing fast and getting reviewed well. There is of course (as there always is) an exception to this rule. If the data is broken down into sub-categories, one category shows no improvement in App Store ratings as time between releases gets shorter: Games. As I think about it, this seems like a likely candidate for an exception. Game design is hard. That much is guaranteed, and games have to be well thought out at all stages of development. The best games are the ones where the teams have taken the time to get the mechanics right. When the building and updating of game mechanics is rushed, it shows more clearly than in any other category.
Where causation actually lies, I cannot tell yet. It could be that only good teams are able to release fast and so good teams are the causal factor in both fast releases and positive App Store ratings. It could also be that even though I tried to randomize the selection I lucked out in finding apps that proved this point, and the findings are wrong.
But as I look at the data and these results, I think there is solid evidence for the hypothesis that releasing fast is good for your app. I will continue to investigate how App Store releases affect apps, and I encourage anyone else to dive into the data and post your findings and comments below.