ICE (Impact, Confidence, Effort) is a great friend to product managers. It helps cut through differing opinions and provides an objective way to order a product roadmap.
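As a quick refresher, here is a minimal sketch of how ICE scoring is often applied. It assumes the common variant where score = Impact × Confidence / Effort, each rated 1–10; the exact formula, scale, and initiative names below are illustrative, and teams tweak them.

```python
from dataclasses import dataclass

@dataclass
class Initiative:
    name: str
    impact: float      # expected impact, rated 1-10
    confidence: float  # confidence in the estimates, rated 1-10
    effort: float      # estimated effort, rated 1-10 (higher = more work)

    @property
    def ice_score(self) -> float:
        # Common variant: reward impact and confidence, penalise effort.
        return self.impact * self.confidence / self.effort

# Hypothetical backlog items, scores are made up for illustration only.
backlog = [
    Initiative("Checkout redesign", impact=8, confidence=7, effort=5),
    Initiative("In-house LLM", impact=9, confidence=3, effort=9),
    Initiative("Compliance consent flow", impact=4, confidence=9, effort=3),
]

# Order the roadmap by descending ICE score.
for item in sorted(backlog, key=lambda i: i.ice_score, reverse=True):
    print(f"{item.name}: {item.ice_score:.1f}")
```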
But, like every framework, ICE has its limitations.
Some situations just don’t fit easily into it.
🚀 Moonshot Initiatives (Low Confidence [C in ICE] isn’t always bad)
Innovative companies need to invest in high-impact, low-confidence projects. While many of these end up failing, not pushing the frontier and missing out on the next big thing can be the bigger risk.
⏳ Long-Term Strategic Bets (Impact [I in ICE] isn’t always short-term)
Some projects, like building an in-house LLM from scratch or integrating a recent acquisition, can take years to show results. ICE, which favours quick wins, may not always recognise their value.
⚡ Fast-Moving Markets
Industries like Quick Commerce or GenAI evolve weekly, making it tough to keep ICE scoring consistent. What seemed impactful last month might not hold the same weight today. For example, think of the impact DeepSeek would have had on incumbent LLM providers’ roadmaps! In fact, I have seen ICE dismissed outright in such volatile environments.
⚖️ Legal & Compliance Work
 – Initiatives ensuring regulatory compliance are non-negotiable. The risk of non-compliance (fines, bans) outweighs any debate on effort or impact.
🏅 Hard-to-Measure but Perceivable Constructs (e.g., Brand)
Anyone who has tried to measure brand perception knows how difficult it is, and attributing movements in perception to specific initiatives is harder still. Initiatives to limit damage to your brand reputation (e.g., a pricing change causing a social media backlash) are hard to score but critical to address, something ICE struggles to quantify effectively.
Being aware of these exceptions helps you use ICE effectively while working around its limitations.
What other scenarios have you encountered where ICE didn’t quite work? Share them in the comments!