The adoption of AI by professionals has been slowed by the lack of control experts feel over their work when using AI. When AI is used merely to generate initial ideas and sketches of marketing materials, its unpredictability does not threaten the integrity of the work. In contrast, the growing use of unpredictable and opaque AI systems in legal and academic work, as well as in public administration, raises significant concerns. Even in software engineering, the loss of control over code created by agentic systems has split practitioners into optimists and sceptics. The bottom line: without predictability and transparency in AI systems, you and your team will lack a sense of control over your work. As a result, AI can feel like a junior colleague you cannot trust. Instead of accelerating your work, you end up spending time double-checking the results.
Creating a high-quality, transparent, and trustworthy AI solution for knowledge work was our guiding thought when developing Skimle. We wanted to design a tool that offers effortless transparency. As part of that, we are pioneering the concept of two-way transparency: in addition to linking AI outputs directly to the source materials, we also show which parts of the source materials were used for which outputs.
The unique opaqueness of AI
Traditional software was transparent because you could fairly quickly understand how it worked: rules, logic, and workflows were all fixed. Contemporary AI, built on large language models, lacks this predictability. An AI model processes information in a far more complex way than regular software, making it impossible to fully understand its behavior. Even if we had the technology to fully trace the internal functioning of a model with hundreds of billions of parameters, there would be no way to communicate that in a useful form. Its internal workings are as impenetrable as those of your coworkers… or your spouse.
This lack of predictability creates a need for a different sort of control. With co-workers, transparency comes from predictability through familiarity; similarly, with time you can learn to predict how AI models behave. Yet this is often not a practical approach. Not only does it take time for professionals to get to know each model and AI tool they use, but as models improve, their behaviors are likely to change. The issue is the same as in organizations: it's great to work with people you know, but business demands often push us to work with people we don't know or don't have time to really get to know. To alleviate this challenge, organizations create predictable roles and transparency into inputs and outputs.
With Skimle, the role of the AI system is fixed: it is not a set of generic AI agents or a chatbot, but an analysis platform that turns your documents into knowledge structures. This has allowed us to engineer radically greater transparency, giving professionals back a sense of control over their work.
Transparency #1: From outputs to the source
Skimle systematically traces all insights back to the original documents and uses regular software to verify that quotes are exactly correct. In fact, even though AI models sometimes fix typos when extracting quotes, Skimle recovers the original text from the document exactly as-is.
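To give a flavour of what such a verification step can look like, here is a minimal Python sketch under our own assumptions (the function name and the fuzzy-matching fallback are illustrative, not Skimle's production code): an exact match is tried first, with an approximate fallback that recovers the verbatim source span even when the model has silently corrected a typo.

```python
import difflib

def recover_exact_quote(source_text: str, ai_quote: str) -> str | None:
    """Return the verbatim span of source_text behind an AI-extracted quote.

    Hypothetical sketch: try an exact match first, then a fuzzy fallback
    for cases where the model silently corrected a typo in the quote.
    """
    # Case 1: the quote appears verbatim in the source document.
    start = source_text.find(ai_quote)
    if start != -1:
        return source_text[start:start + len(ai_quote)]

    # Case 2: slide a window of similar length over the source and keep
    # the best-matching span above a similarity threshold.
    window = len(ai_quote)
    best_ratio, best_span = 0.0, None
    for i in range(0, max(1, len(source_text) - window + 1), max(1, window // 4)):
        candidate = source_text[i:i + window]
        ratio = difflib.SequenceMatcher(None, ai_quote, candidate).ratio()
        if ratio > best_ratio:
            best_ratio, best_span = ratio, candidate
    return best_span if best_ratio >= 0.9 else None  # None => flag for review
```

Returning the exact span from the source, rather than the model's version of the quote, is what makes it possible to guarantee that every displayed quote is verbatim.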
Our user interface, designed for data analysis, allows you to:
- Click any insight anywhere to see the exact quotes it is based on
- Click any quote to see it in the context of the original document
- Understand precisely which documents relate to each category of insights
You don't have to search through the original documents for evidence that the AI did not hallucinate. The user interface is designed to minimize the time it takes an expert to navigate between original documents and high-level abstractions, providing the sense of control that comes from being able to quickly understand the materials they are dealing with.
Transparency #2: From original document to insights
We found that knowing what data underpins AI answers is important, but not enough. Many basic chatbots and RAG-based analysis systems provide this first form of transparency, although typically with a less polished user experience than Skimle. But often it is not enough to know what the AI based its answer on. In many cases, being in control means knowing that the AI didn't miss anything important.
From working with lawyers and public sector policy analysts, we came to understand that a second form of transparency can be equally important: the ability to open an important document and quickly see what AI picked up and what it did not.
Skimle allows you to open any original document and see, highlighted, the text that was picked up as quotes for insights. Clicking highlighted text shows which insight(s) it relates to. Conversely, any text on a white background was not picked up by the AI. It is extremely rare for Skimle to miss an important quote relevant to the analysis the user has specified. The ability to quickly glance through documents and see what was picked up, and what was not, provides the sense of control you need to stand behind the decisions you make and the reports you author.
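The reverse view can be built from the same verified quote offsets. As a minimal illustration (our own sketch with hypothetical offsets, not Skimle's internal code), overlapping quote spans are merged into highlight ranges, and everything outside them renders on a white background:

```python
def highlight_ranges(quote_spans: list[tuple[int, int]]) -> list[tuple[int, int]]:
    """Merge (start, end) character offsets of verified quotes into
    non-overlapping highlight ranges; text outside them stays unhighlighted."""
    merged: list[tuple[int, int]] = []
    for start, end in sorted(quote_spans):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))  # overlaps: extend
        else:
            merged.append((start, end))
    return merged

# Hypothetical offsets: two overlapping quotes and one separate quote
print(highlight_ranges([(40, 120), (100, 160), (300, 350)]))
# -> [(40, 160), (300, 350)]
```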
With Skimle, you can also manually change the category structures and the categorisation of individual passages, giving you even more control over the analysis.
Our vision for agentic AI: Transparency of AI work
This two-way foundation enables an even more ambitious future.
Skimle is evolving to be an integral part of agentic workflows:
- Track how your documents are used across multiple AI agents
- Show you where and when each passage influenced an output
- Provide a transparent chain from a line in a PDF … to an AI-generated insight … to a workflow decision or action (sketched below)
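As a rough sketch of what one hop in such a provenance chain could look like (the schema and field names below are purely our assumptions, not a published Skimle format):

```python
from dataclasses import dataclass

@dataclass
class ProvenanceLink:
    """One hop in a provenance chain (illustrative schema only)."""
    document: str            # source file the quote came from
    span: tuple[int, int]    # character offsets of the verbatim quote
    insight_id: str          # the AI-generated insight built on the quote
    agent: str               # the agent that consumed the insight
    action: str              # downstream workflow decision or action

link = ProvenanceLink(
    document="interview_03.pdf",
    span=(1042, 1187),
    insight_id="theme/pricing-concerns",
    agent="summary-agent",
    action="flagged for legal review",
)
```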
As an increasing amount of work gets automated, we believe in augmentation. While real expertise cannot be replaced by algorithms, peak expert performance increasingly requires help from AI agents. Our vision for Skimle is to minimize the effort required from experts to initiate, control, and review knowledge work done by agentic AI systems on textual data.
Knowledge workers will only use AI when they feel in control. This requires predictability and effortless transparency. Predictability can only come from a well-defined and stable role for the AI system in the expert's workflow, as the opaqueness of its functioning and the rapid pace of change make it challenging to truly understand how AI works. Skimle will not create a workout program for you or help you prepare for your dinner party. We structure knowledge for you so that you can develop a deep understanding of interviews and other documents, make impactful decisions, and take action. The hierarchical data structure has a fixed form that matches our user interface design.
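As an illustration of that fixed form (a simplified rendering under our own assumptions, not Skimle's actual schema), the hierarchy always runs from categories to insights to verbatim quotes:

```python
# Illustrative only: a simplified rendering of the fixed hierarchical form,
# from categories down to insights down to verbatim source quotes.
analysis = {
    "Pricing concerns": {                        # category
        "Customers find the tiers confusing": [  # insight
            ("interview_03.pdf", (1042, 1187)),  # verbatim quote spans
            ("interview_07.pdf", (88, 201)),
        ],
    },
    "Onboarding friction": {
        "Initial setup takes too long": [
            ("interview_01.pdf", (412, 530)),
        ],
    },
}
```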
Transparency is not just about AI-provided references to websites and documents that nobody ever clicks. It requires effortless access to the original data underpinning AI-generated insights, and the ability to revisit the most important documents to gain confidence that the text ignored by the AI is indeed irrelevant. Over time, experts will learn to trust Skimle to surface the important knowledge from documents. But just as with trusted assistants, it is good to know that, when needed, you can take a look and see that your trust is not misplaced.
You can try Skimle for free to see how well it fits your analysis requirements.
About the Author
Henri Schildt is Associate Professor of Strategy at Aalto University School of Business and co-founder of Skimle. He has published dozens of peer-reviewed articles using qualitative methods, including work in Academy of Management Journal, Organization Science, and Strategic Management Journal. His research focuses on organizational strategy, innovation, and qualitative methodology.
Henri developed Skimle after years of frustration with existing qualitative analysis tools that failed to leverage AI's potential while maintaining academic rigor.

Google Scholar Profile
