Community Intelligence Fund Methodology

The interesting part of the Community Intelligence Fund was not the idea that a crowd could find good stocks. It was the discipline that turned a noisy stream of suggestions into a real, regulated portfolio. This page describes that discipline.

Updated April 21, 2026 · StockJungle archive

Overview

The fund operated in three layers. The community proposed candidate ideas. A small management team screened those candidates against fundamental and risk filters. A portfolio manager constructed and adjusted positions inside a published mandate. Every layer had explicit rules.

Source of stock ideas

Ideas came from registered members. Each idea required a price, a thesis, and a stated time horizon. Anonymous picks were not allowed. The reputation of the contributor traveled with the pick, which meant the system rewarded careful work over volume. A member who wrote ten thoughtful theses in a year carried more weight than one who wrote a hundred shallow ones.
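One way to picture the reputation mechanism described above is a score in which thesis quality dominates volume. This is a minimal sketch under assumed names and an assumed formula; the article does not publish how the original system actually weighted contributors.

```python
import math
from dataclasses import dataclass

@dataclass
class Member:
    name: str
    theses_written: int
    avg_quality: float  # assumed 0..1 quality score from review

def reputation(m: Member) -> float:
    # Quality scales the score directly; volume only counts
    # logarithmically, so depth beats sheer output.
    return m.avg_quality * math.log1p(m.theses_written)

careful = Member("careful", theses_written=10, avg_quality=0.9)
prolific = Member("prolific", theses_written=100, avg_quality=0.2)

# Under this scheme, ten thoughtful theses outweigh a hundred shallow ones.
assert reputation(careful) > reputation(prolific)
```

The specific formula is illustrative; any scoring rule with the same shape, quality first and diminishing returns to volume, would express the same incentive.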

Screening and selection

The first screen was mechanical. Candidates with liquidity below a floor were rejected. Companies with going concern issues, restated filings, or unresolved accounting flags were rejected. Sector exposure beyond the cap was trimmed at the candidate stage. The second screen was human. The managers read the thesis, checked the numbers, and either advanced the idea to the construction stage or sent feedback to the contributor explaining why not.
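The mechanical first screen can be sketched as a set of pass/fail checks. The thresholds and field names below are assumptions for illustration; the article does not publish the actual liquidity floor or sector cap.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    ticker: str
    sector: str
    avg_daily_dollar_volume: float
    going_concern: bool
    restated_filings: bool
    accounting_flags: bool

LIQUIDITY_FLOOR = 1_000_000  # assumed dollar-volume floor
SECTOR_CAP = 0.25            # assumed max portfolio share per sector

def passes_mechanical_screen(c: Candidate, current_sector_weight: float):
    """Return (advance?, reason). Rejections mirror the three rules above."""
    if c.avg_daily_dollar_volume < LIQUIDITY_FLOOR:
        return False, "liquidity below floor"
    if c.going_concern or c.restated_filings or c.accounting_flags:
        return False, "going concern or accounting flag"
    if current_sector_weight >= SECTOR_CAP:
        return False, "sector exposure at cap"
    return True, "advance to human review"
```

Note that the screen only gates what reaches the human stage; it never sizes a position.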

Portfolio construction

Position size was a manager decision, not a vote. The portfolio held a moderate number of names so that any one position could matter without dominating the result. New positions were generally entered over a few sessions to limit price impact. Trims and exits were treated with the same discipline. The mandate did not allow leverage and did not allow shorting.
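Staged entry over a few sessions can be sketched as splitting an order into near-equal slices. The session count and equal slicing are assumptions; the article only says entries were spread out to limit price impact, with the exact schedule a manager decision.

```python
def entry_schedule(total_shares: int, sessions: int = 3) -> list[int]:
    """Split an order into near-equal per-session slices.

    Any remainder is spread one share at a time over the
    earliest sessions, so the slices sum exactly to the order.
    """
    base, rem = divmod(total_shares, sessions)
    return [base + (1 if i < rem else 0) for i in range(sessions)]

# 10,000 shares over three sessions:
print(entry_schedule(10_000))  # [3334, 3333, 3333]
```

The same schedule applies to trims and exits, which the mandate treated with the same discipline as entries.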

Risk management

Three rules did most of the work: a maximum position size at cost, a maximum sector exposure, and a drawdown trigger that forced a written review when a position fell beyond a set threshold relative to its entry thesis. The drawdown rule was the most important one. It made sure the fund actually re-examined a thesis instead of letting it drift quietly to zero.
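The three rules lend themselves to a simple check that runs against every position. Only the structure, a size cap, a sector cap, and a drawdown review trigger, comes from the article; the numeric thresholds below are assumptions.

```python
MAX_POSITION_AT_COST = 0.05   # assumed: 5% of portfolio at cost
MAX_SECTOR_EXPOSURE = 0.25    # assumed: 25% per sector
DRAWDOWN_TRIGGER = -0.20      # assumed: -20% vs entry forces a written review

def risk_flags(position_cost_weight: float,
               sector_weight: float,
               return_vs_entry: float) -> list[str]:
    """Return every rule a position currently breaches."""
    flags = []
    if position_cost_weight > MAX_POSITION_AT_COST:
        flags.append("position over size cap")
    if sector_weight > MAX_SECTOR_EXPOSURE:
        flags.append("sector over exposure cap")
    if return_vs_entry <= DRAWDOWN_TRIGGER:
        flags.append("drawdown review required")
    return flags
```

The drawdown flag is deliberately not a forced sale: it triggers a written review of the thesis, which matches the article's point that the rule exists to force re-examination rather than automatic action.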

Disclosure and transparency

Holdings were posted on the public site at a cadence well above the regulatory minimum. The reasoning behind a buy or a trim was published in plain language. When the managers overrode a popular community pick, the page explained why. The point was not to advertise the fund. The point was to leave a record an investor could check.

Limits of the model

The community surfaced candidates well and sized positions poorly. The fund's rules acknowledged that. No vote, no matter how strong, could produce a position size on its own. Crowds also tend to overweight names that are already popular. The screening stage existed in part to push back against that bias. None of this is unique to the original fund. Any system that uses crowd input for capital allocation has to solve the same problem.

Lessons for current investors

Idea sourcing is a different problem from position sizing. Reputation works when it is honest and visible. A small, written set of risk rules outperforms a large, vague one. Disclosure changes behavior. The current StockJungle applies the same lessons in a different shape, where named Portfolio Managers replace a single fund and competition between styles replaces a single vote.
