OK, I found the original paper and read it. The methods they use are terrible - they don't seem to know what they're doing, and they're not measuring what they think they're measuring. I believe that, for the most part, everything they found is noise, which fits with their conclusion that timeouts don't change anything.

The discrepancy between their "expected value" of -0.07 points per minute for the Kings and the actual -0.0236 could potentially be explained by the fact that all scoring before the first timeout of a quarter was effectively thrown out (again, a methods problem). Qualitatively, we finished quarters very poorly - time and again we gave up big differentials at the end of the quarter - and all of that comes after the first timeout. Which would suggest we actually did very well opening quarters.
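To make the selection-bias point concrete, here's a toy simulation - all numbers are hypothetical, nothing here comes from the study's data. It models a team whose true full-quarter differential is zero but which starts quarters strong and fades late. If the analysis only counts scoring after the first timeout (assumed here to fall around the midpoint of the quarter), the measured per-minute rate comes out much more negative than the true one:

```python
import random

random.seed(0)

# Hypothetical model: a 12-minute quarter where the team runs
# +0.1 pts/min for the first 6 minutes and -0.1 pts/min for the
# last 6, so the true average differential is 0 pts/min.
def simulate_quarter():
    early = sum(random.gauss(+0.1, 0.5) for _ in range(6))  # minutes 1-6
    late = sum(random.gauss(-0.1, 0.5) for _ in range(6))   # minutes 7-12
    return early, late

n = 100_000
early_tot = late_tot = 0.0
for _ in range(n):
    e, l = simulate_quarter()
    early_tot += e
    late_tot += l

# Full-quarter per-minute differential vs. what you measure if
# everything before the (assumed midpoint) first timeout is dropped.
true_rate = (early_tot + late_tot) / (12 * n)
post_timeout_rate = late_tot / (6 * n)

print(f"true per-minute differential:    {true_rate:+.3f}")
print(f"post-first-timeout differential: {post_timeout_rate:+.3f}")
```

The post-timeout estimate lands near -0.1 even though the team's true rate is zero - exactly the kind of gap you'd expect if a late-quarter fade gets counted while the strong opens get thrown away.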
As for their -0.69 "average value" for the Kings, I can't imagine how a number that large could come up given their methods. It doesn't pass the smell test. I'd have to see their actual data analysis, but I'd bet they've got a bug somewhere, on top of the shoddy methods.
I wouldn't be comfortable drawing any conclusions at all from this study, is what I'm saying.