Curious, what is the standard error when you're using the entire population and not a sample? It's been a while since my statistics class, but in our case we used the entire population of All-Stars going back to 1948. I think the error goes away, since it's a sampling error and we aren't sampling; we're using the universe.
You're actually running into the difference between descriptive statistics and inferential (predictive) statistics. If you have the entire population, then from a descriptive point of view the standard error really is meaningless. The formula won't calculate to zero, but there's no uncertainty about what the mean is if you have the whole population.
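To see concretely why the formula won't calculate to zero, here's a minimal sketch; the per-round All-Star counts are made up purely for illustration:

```python
import math

# Hypothetical "population": All-Stars produced per draft round
# (made-up numbers purely for demonstration).
all_stars_per_round = [12, 8, 5, 3, 1, 0, 2, 0, 1, 0]

n = len(all_stars_per_round)
mean = sum(all_stars_per_round) / n

# Standard deviation with the usual n - 1 denominator, as most software computes it.
variance = sum((x - mean) ** 2 for x in all_stars_per_round) / (n - 1)
std_error = math.sqrt(variance) / math.sqrt(n)

print(f"mean = {mean:.2f}, standard error = {std_error:.2f}")
# The standard error comes out nonzero, but if this list really is the whole
# population, there is no uncertainty about the mean from a descriptive view.
```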
But clearly a goal here is to apply the knowledge we have in order to make predictions about future drafts. In that sense we don't have all the data, because some of it hasn't happened yet, and we're using our sample (drafts that have already happened) to estimate the true values of our population (all drafts past and future).
Another interesting question is whether the ability to find an All-Star in later rounds has changed over time, and if so, in what direction. I suppose that would require grouping the years in some increment. Curious as to your thoughts on the appropriate grouping; I'm guessing 15-year increments might be best.
Grouping in increments works a little, but it's not very easy to make a statistical case because you start running short of data points. That's the kind of thing where a linear regression on the individual (ungrouped) data points is often useful, but with data like this (lots of zeros punctuated with values here and there) I don't think linear regression does a very good job. You could try 10- to 15-year increments to see what it looks like, but you may have trouble convincing folks (myself included) that any small trend is real.
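Here's a rough sketch of both approaches, a regression on ungrouped 0/1 points and a 15-year binning; everything here (the hit rate, the year range, the random data) is fabricated just to show the mechanics, not real draft results:

```python
import numpy as np
from scipy import stats

# Hypothetical data: for each draft year, 1 if a later-round pick became an
# All-Star, else 0 -- lots of zeros punctuated with values, as described above.
rng = np.random.default_rng(0)
years = np.arange(1948, 2024)
hits = (rng.random(years.size) < 0.08).astype(int)  # made-up hit rate

# Linear regression on the ungrouped 0/1 points: with data like this the
# slope and p-value tend to be hard to trust.
slope, intercept, rvalue, pvalue, stderr = stats.linregress(years, hits)
print(f"ungrouped: slope = {slope:.5f}, p = {pvalue:.3f}")

# Grouping into 15-year bins smooths things out, but note how few points remain.
bin_edges = np.arange(1948, 2024 + 15, 15)
bin_idx = np.digitize(years, bin_edges) - 1
rates = [hits[bin_idx == i].mean() for i in range(len(bin_edges) - 1)]
print("per-bin All-Star rates:", [f"{rate:.2f}" for rate in rates])
```

With only five or six bins, even a visible rise or fall in the per-bin rates is exactly the kind of small trend that's hard to argue is real rather than noise.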