For an apparel manufacturer, colour is not a creative indulgence. It’s a capital decision. Every colour we choose to stock in fabric form locks money into yarn, dyeing, processing, and warehouse space. If that colour doesn’t move, capital stays blocked for months. If we miss a colour that the market wants, we lose orders and, more importantly, speed.
This problem becomes sharper when you operate with fast turnaround expectations and also sell fabrics. Our customers expect consistency, but they also expect availability. We can’t afford to experiment wildly, and we can’t afford to be late. Colour decisions directly impact lead times, fill rates, and working capital cycles. So instead of asking “what colours are trending?”, we framed a more practical question: which colours are most likely to convert into real orders in the next 45 days, with the least production risk? The system we built exists to answer exactly that.
While more sophisticated tools exist for the same use case, our internal culture has always leaned towards self-reliance, so we took the painstaking route of building our own system. The system exists after multiple iterations: the first 4-5 runs happened completely manually, with people actually visiting sites, then listing and categorising colours. Over time we were able to automate and accelerate our core insight into a system which, for the first time, showed real-world applications for us.
Explaining the decisions and system from an apparel industry POV
From a sourcing and production standpoint, the core challenge is balancing consensus with flexibility. Colours that appear across many brands are safer to stock, but blindly following last season’s winners leads to stagnation. On the other hand, chasing novelty too aggressively increases dead inventory risk. Our system is designed to sit in the middle: conservative by default, but not blind to change.
We rely on two primary signals. The first is an external market signal derived from 507 apparel brands across the world. This reflects what the broader market is already committing inventory to. The second is our internal signal, based on sales, repeat orders, and inquiries from our own clients. Internal data is closer to revenue, but smaller in volume and sometimes influenced by our own past production decisions. External data is larger and cleaner, but slower to reflect local nuances. Combining both reduces blind spots.
Weights are intentionally asymmetric. External signals carry more weight because they represent wider market consensus, while internal signals act as a corrective layer. We further adjust weights at the brand level over time based on how predictive each source has been historically. The system does not decide what we produce on its own. It produces a ranked list with confidence levels, which is then reviewed by humans who understand seasonality, client context, and operational constraints.
Once we get a list of 15 colours with a high probability for the coming 45-day period, we manually adjust tones so the shades lend themselves to print, embroidery, and other styling techniques, making the fabric more versatile. For reference, below are our system's predictions as generated on 1st October 2025.
| Rank | Colour Name | Hex Code | Logic / Signal Source | Status |
| --- | --- | --- | --- | --- |
| 01 | Black | | Core Internal Sales + External Consensus | High Volume |
| 02 | Wine Purple | | Cultural Signal (Movie Posters) + D2C Growth | Trend Leader |
| 03 | Navy Blue | | Stable D2C Consensus | Steady |
| 04 | Olive Green | | Internal Inquiry + Manual Hunch | Growth |
| 05 | Charcoal Grey | | High External Presence | Steady |
| 06 | Sage Green | | Novelty Signal (Low Decay) | Emerging |
| 07 | Rust Orange | | Seasonal Cultural Alignment | Peak |
| 08 | Off-White | | Core Internal Sales | Steady |
| 09 | Teal | | Cultural Signal Adjustment | Monitoring |
| 10 | Dusty Rose | | External Market Saturation | Slow Decay |
| 11 | Beige | | Neutral Consensus | Steady |
| 12 | Mustard | | External Pull | Seasonal |
| 13 | Coffee Brown | | Manual Hunch (High Confidence) | New Entry |
| 14 | Lavender | | External Tail-end | Fading |
| 15 | Sky Blue | | Base Consensus | Low Priority |
Explaining the technical implementation
The system runs in fixed 45-day cycles and is built as a deterministic pipeline rather than a real-time predictor. For data collection, we use Chromium-based headless browsing with Playwright. This allows us to reliably render modern D2C sites that rely heavily on client-side JavaScript. Each site has a controlled scraping configuration, and scraping is rate-limited and logged so failures, structural changes, or partial data are visible and traceable.
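The per-site configuration and logging described above can be sketched as follows. This is a minimal illustration, not our production code: the field names, delays, and the injected `fetch` callable (which would wrap a Playwright page load in practice) are all hypothetical.

```python
import time
from dataclasses import dataclass, field

# Hypothetical per-site scraping configuration; field names are illustrative.
@dataclass
class SiteConfig:
    name: str
    listing_url: str
    min_delay_s: float = 5.0             # rate limit between page loads
    product_selector: str = "a.product"  # site-specific CSS selector

@dataclass
class ScrapeLog:
    events: list = field(default_factory=list)

    def record(self, site: str, status: str, detail: str = "") -> None:
        self.events.append({"site": site, "status": status,
                            "detail": detail, "ts": time.time()})

def crawl(sites, fetch, log):
    """Visit each site sequentially, honouring per-site rate limits.

    `fetch` is injected (e.g. a wrapper around Playwright's page.goto)
    so that failures are logged and traceable instead of aborting the
    whole 45-day cycle's run.
    """
    results = {}
    for cfg in sites:
        try:
            results[cfg.name] = fetch(cfg)
            log.record(cfg.name, "ok")
        except Exception as exc:  # partial data stays visible, not fatal
            log.record(cfg.name, "error", str(exc))
        time.sleep(cfg.min_delay_s)
    return results
```

Keeping the fetch function injectable also makes the pipeline testable without a live browser.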
For colour extraction, we intentionally rely on Claude for vision analysis instead of traditional OpenCV or clustering pipelines. We tested classical approaches and found they struggle with real-world apparel imagery: models, backgrounds, lighting variation, and printed graphics introduce too much noise. Claude consistently performs better at isolating the actual fabric colour while ignoring prints, accessories, and backgrounds. This also simplifies the pipeline and lowers long-term maintenance cost compared to managing multiple brittle vision heuristics.
Claude’s output is not accepted blindly. Each extracted hex code is validated against the image to ensure the colour actually exists in the pixel space. If the detected colour covers less than a minimal percentage of the garment area, the extraction is rerun with a stricter prompt. We log reruns and monitor their rate to detect degradation in image quality or prompt effectiveness. This gives us a balance between model accuracy and system-level safety.
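The coverage check can be sketched roughly like this. The tolerance and minimum-coverage thresholds below are illustrative placeholders, not our tuned values, and the "image" is simplified to a flat list of RGB pixels from the garment region.

```python
def hex_to_rgb(hex_code: str) -> tuple:
    """Parse '#RRGGBB' into an (r, g, b) tuple of 0-255 ints."""
    h = hex_code.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def coverage(pixels, claimed_hex, tolerance=40.0):
    """Fraction of pixels within a Euclidean RGB distance of the claimed colour."""
    target = hex_to_rgb(claimed_hex)
    hits = sum(
        1 for p in pixels
        if sum((a - b) ** 2 for a, b in zip(p, target)) ** 0.5 <= tolerance
    )
    return hits / len(pixels) if pixels else 0.0

def validate_extraction(pixels, claimed_hex, min_coverage=0.15):
    """True if the claimed colour genuinely occupies enough of the garment;
    False would trigger a rerun with a stricter prompt (and a log entry)."""
    return coverage(pixels, claimed_hex) >= min_coverage
```

In production the distance check would be done in a perceptual space rather than raw RGB, but the gating logic is the same.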
Once colours are extracted, all hex codes are converted into LAB colour space, and similarity is calculated using CIEDE2000 Delta E. This matters because minor lighting differences can produce visually identical colours with different hex values. We cluster colours with ΔE ≤ 10 into a single perceptual shade, ensuring that “near-identical” blues or greys are treated as the same colour from a human perspective. This clustering happens at both site level and global aggregation level.
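The conversion and clustering step looks roughly like this. For brevity this sketch uses the simpler CIE76 Delta E rather than the full CIEDE2000 formula we use in production; the hex-to-LAB conversion (sRGB, D65 white point) and the greedy ΔE ≤ 10 merge are the real mechanism.

```python
def hex_to_lab(hex_code: str) -> tuple:
    """Convert an sRGB hex code to CIELAB (D65 white point)."""
    h = hex_code.lstrip("#")
    rgb = [int(h[i:i + 2], 16) / 255.0 for i in (0, 2, 4)]
    # sRGB gamma expansion to linear light
    lin = [c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
           for c in rgb]
    r, g, b = lin
    # linear RGB -> XYZ (D65)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    # XYZ -> LAB
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return (116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz))

def delta_e76(lab1, lab2):
    """CIE76 Delta E -- a simpler stand-in here for CIEDE2000."""
    return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5

def cluster_shades(hex_codes, threshold=10.0):
    """Greedily merge each colour into the first cluster within the ΔE threshold."""
    clusters = []  # list of (representative_lab, [member_hex_codes])
    for hx in hex_codes:
        lab = hex_to_lab(hx)
        for rep, members in clusters:
            if delta_e76(lab, rep) <= threshold:
                members.append(hx)
                break
        else:
            clusters.append((lab, [hx]))
    return [members for _, members in clusters]
```

This is why two near-identical blacks from different sites count as one shade, while black and white never merge.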
Each site produces its own ranked top list based on in-stock T-shirts, weighted by frequency and adjusted for catalogue padding. These site-level palettes are then combined into a global external ranking using site weights. Site weights are not static. They evolve slowly over cycles based on how often a site’s colours persist and align with genuine market demand. Weight changes are capped per cycle to prevent instability and overreaction to noise.
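The capped weight evolution can be illustrated with a few lines. The learning rate and cap below are hypothetical values, not the system's tuned parameters; the point is that one noisy cycle cannot swing a site's influence.

```python
def update_site_weight(current: float, hit_rate: float,
                       lr: float = 0.2, cap: float = 0.05) -> float:
    """Nudge a site's weight toward its observed predictive hit rate.

    `hit_rate` is the fraction of the site's ranked colours that persisted
    into genuine demand this cycle. The per-cycle change is clamped to
    +/- `cap` so weights evolve slowly.
    """
    proposed = current + lr * (hit_rate - current)
    change = max(-cap, min(cap, proposed - current))
    return current + change
```

So a site whose colours suddenly all convert still gains at most `cap` per cycle, and a site that misbehaves for one cycle loses at most the same.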
The external ranking is then combined with the internal top colours using a fixed 60:40 ratio. Internal data is decontaminated by tagging SKUs that were produced because of prior system recommendations and down-weighting those signals in future cycles. This prevents the system from validating its own past decisions instead of learning from the market.
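A minimal sketch of the blend and decontamination step, assuming scores are already normalised per source; the 0.5 down-weighting factor and the colour names are illustrative only.

```python
def blend_rankings(external: dict, internal: dict,
                   self_influenced: set, decon: float = 0.5) -> list:
    """Combine normalised external and internal scores at a fixed 60:40 ratio.

    Colours whose internal demand exists because of a prior system
    recommendation are down-weighted on the internal side, so the system
    does not validate its own past decisions.
    """
    combined = {}
    for colour in set(external) | set(internal):
        ext = external.get(colour, 0.0)
        intl = internal.get(colour, 0.0)
        if colour in self_influenced:
            intl *= decon  # decontaminate self-generated demand
        combined[colour] = 0.6 * ext + 0.4 * intl
    return sorted(combined, key=combined.get, reverse=True)
```

A colour that only looks strong internally because we stocked it last cycle therefore needs genuine external consensus to stay near the top.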
On top of this core pipeline, we optionally layer cultural inputs and manual hunches, both with intentionally small weights. Cultural signals such as major movie or OTT releases are extracted separately and can provide a modest forward-looking nudge if they align closely with existing colours. Manual hunches are tracked, weighted lowest, and measured for accuracy over time. They are not there to override data, only to surface edge cases the system may otherwise ignore.
The final output is a ranked list of 15 colours with confidence bands and a transparent breakdown of contributing factors. It is reviewed manually before any production commitment. The system’s success is not measured by trend prediction, but by reduced dead inventory, faster fulfilment, and fewer colour-related production mistakes over multiple cycles.
