Our approach to AI
MARKET INTELLIGENCE
By now, probably every one of us has used ChatGPT at least once and found it impressive enough to spur a certain kind of existential question: “Am I still needed now that there’s a tool like this around?”
However, the reality as of 2025 is that Generative AI tools (particularly those based on Large Language Models, the technology behind ChatGPT) still suffer from a number of significant flaws. Aside from the relatively well-known habit of occasionally making things up (the so-called “hallucinations”), there are several more subtle weaknesses.
Chief among them is the tendency to gravitate towards generic outputs. Notice how most AI-written text looks similar, to the point of being easily recognizable. And it is not just the style but also the content that is affected: key insights and original points tend to get flattened out, diluted into a sea of elaborate discourse that, at the end of the day, conveys very little actual information.
Another point concerns source selection. This is particularly relevant, even (and especially) for more advanced tools like the “Deep Research” functionalities offered by providers such as OpenAI and Perplexity. These are impressive tools, able to search deep into the web and summarise findings from hundreds of webpages.
However, their very nature defines their limits: these tools only have access to what is freely available online, and that represents a critical obstacle for research.
More significantly, a crucial shortcoming is the inability of current models to form a proper judgement of source authority: it is not unusual to see them over-relying on individual sources of dubious provenance, which results in biased extrapolations divorced from reality.
Perhaps most dangerously, AI tools create an illusion of comprehensive coverage. Their findings are presented with confidence and structure, which often leaves users unaware of what might be missing.
As an example, we recently experienced this firsthand when using ChatGPT’s Deep Research to compile a list of global stores for a major brand. The results seemed thorough and well-structured, until we cross-checked them manually to discover that over half the locations were missing. The AI had no way to know what it didn’t know, and crucially, no way to tell us about these gaps.
What we did instead was still use AI, but with a different approach: not to reach the goal directly, but as assistance in paving the way towards it. In practice, that meant using AI to help us reverse-engineer the website’s structure, so that we could extract the stores’ information comprehensively and solve the problem efficiently.
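To make the idea concrete: store-locator pages often load their data from a structured endpoint behind the scenes, and once that structure is known, every record can be extracted directly rather than trusting an AI-generated summary. The sketch below is purely illustrative; the payload shape, field names, and store entries are hypothetical, not taken from the actual project.

```python
import json

# Hypothetical example: a store-locator page that feeds its map from a
# JSON payload. Knowing this structure lets us enumerate every store
# deterministically, with no gaps an AI summary might silently introduce.
sample_payload = """
{
  "stores": [
    {"name": "Milan Flagship", "country": "IT", "city": "Milan"},
    {"name": "Paris Boutique", "country": "FR", "city": "Paris"},
    {"name": "Tokyo Store",    "country": "JP", "city": "Tokyo"}
  ]
}
"""

def extract_stores(payload: str) -> list[dict]:
    """Parse the locator payload and return one record per store."""
    data = json.loads(payload)
    return [
        {"name": s["name"], "country": s["country"], "city": s["city"]}
        for s in data["stores"]
    ]

stores = extract_stores(sample_payload)
print(f"{len(stores)} stores extracted")
```

The point is not the code itself but the division of labour: the AI helps identify how the site is organised, while a deterministic script does the exhaustive extraction, so the count of stores is verifiable rather than assumed.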
Consider how we approach cooking: anyone can buy a pre-made meal, just as anyone can prompt ChatGPT for a market analysis. But professional chefs know which ingredients to select, how to combine them, how to intervene when something goes wrong, and how to adjust for specific situations.
This is an important distinction to be made: we don’t eschew AI altogether – in fact, at CSIL, we are implementing an internal AI project that will empower our researchers to more easily access and organize decades of our accumulated insights. However, we do not believe in shortcuts: reliable insight takes judgment, experience, and an understanding of what the data says and doesn’t say. AI can certainly assist and augment, but it cannot replace the responsibility of recognizing what truly matters.
Steering clear of pre-packaged and shallow outcomes, at CSIL we believe in delivering high-quality discernment, grounded in our consolidated methodology, deep industry expertise, and firsthand insights gathered directly from market players.
Source: CSIL World Furniture Magazine #07, September 2025
- Browse the Magazine https://www.worldfurnitureonline.com/editorial/world-furniture-magazine-07-september-2025/
- Discover all issues at https://www.worldfurnitureonline.com/magazine
