Hot Hands
Hot Hands was the original StockJungle's attempt to answer a hard question with public data. Who on the site is actually good, and how would we know? It worked better than the academic literature said it would, and the reasons it worked still apply today.
What Hot Hands was
A public scoreboard of every contributor's stock picks. Members made calls. The site recorded the call, the date, the price, and the reasoning. Months later the scoreboard told the truth without flattery. Members who picked well rose. Members who only made noise sank.
How contributors were scored
Every pick was scored on the return of the security it named, over the same measurement window as every other pick. Returns were measured net of an obvious benchmark, usually the S&P 500. The scoring was simple enough that any member could check it. Complexity in a rating system tends to be a place for the operator to hide. Hot Hands was deliberately plain.
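As a sketch of that arithmetic (the function and parameter names below are illustrative, not taken from the original system), the score reduces to a pick's simple return minus the benchmark's return over the same window:

    def score_pick(entry_price: float, exit_price: float,
                   benchmark_entry: float, benchmark_exit: float) -> float:
        """Excess return of a pick over a benchmark (e.g. the S&P 500),
        measured across the same window. Illustrative sketch only."""
        pick_return = exit_price / entry_price - 1.0
        benchmark_return = benchmark_exit / benchmark_entry - 1.0
        return pick_return - benchmark_return

A pick that rose 15 percent while the index rose 5 percent over the same window scores 0.10, and any member could redo that arithmetic by hand.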
Why rankings mattered
Public scoring changes behavior. A member who knows their picks will be visible writes more carefully. A member who knows their record can be checked stops chasing noise. The ranking did not need to be perfect. It only needed to be honest, public, and consistent. That alone improved the average quality of the calls on the site.
How picks were evaluated
Each pick carried three timestamped facts: the call itself, the price at the time of the call, and the closing price at the end of the measurement window. The system did not adjust scores for events that arose after the call. It did not give partial credit for being early. The point was to measure the quality of the actual decision, not the quality of the post hoc story.
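A minimal sketch of such a record, again with assumed field names rather than the original schema, makes the point concrete: the record is frozen once the window closes, and the score uses nothing that was unknowable at the time of the call.

    from dataclasses import dataclass
    from datetime import date

    @dataclass(frozen=True)
    class PickRecord:
        ticker: str
        call_date: date        # when the call was made
        price_at_call: float   # price recorded at the moment of the call
        window_end: date       # end of the fixed measurement window
        price_at_end: float    # closing price at the end of the window

        def simple_return(self) -> float:
            # Scored from recorded prices only: no adjustment for events
            # after the call, no partial credit for being early.
            return self.price_at_end / self.price_at_call - 1.0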
Role in the community ecosystem
Hot Hands fed the rest of the platform. Members with strong scores drew more readers. Their ideas were more likely to surface in the Community Intelligence Fund's candidate pool. The rankings were not the only signal that mattered, but they were the only one a stranger could verify in five seconds. That is why they worked.
Lessons for modern investor tracking
Three lessons survived. First, the simplest scoring rule that captures the core question is usually the best. Second, public records improve behavior more than private ones. Third, a small group of consistently strong contributors is more useful than a large group of average ones. The current site applies all three to its named Portfolio Managers.