Building on the research and methodology outlined above, we developed
Illume as our response to the challenges of civic engagement
data analysis. Illume is a web application that distills large volumes of unstructured
public feedback into clear, organized themes, helping users make informed decisions with confidence.
At its core, Illume is guided by two principles:
- Efficiency through automation: accelerating the manual, resource-intensive process of qualitative analysis.
- Trustworthiness through transparency: ensuring results are interpretable and that professional judgment is never replaced.
Our design philosophy is human-centered: Illume augments the expertise of planners rather than substituting for it. By streamlining repetitive coding work, the tool frees users to focus on interpreting insights and applying them strategically.
1. Thematic Analysis
Once the data is uploaded, Illume applies its key feature: thematic analysis.
Thematic analysis is a proven qualitative research methodology that transforms unstructured
public feedback into meaningful, organized categories. It is discussed in more detail in
Section 2.1 as part of the problem context.
By applying natural language processing to civic engagement data, Illume can efficiently process
hundreds of public comments in under 20 seconds. For example, Illume automatically groups together
concerns about “parking” even when respondents phrase them differently, such as “not enough spaces”
or “nowhere to park.” Our sample dataset draws on actual public feedback from the Region of
Waterloo’s Stage 2 ION proposal (see Appendix D).
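To make this grouping step concrete, the sketch below shows one way semantically similar comments could be clustered using sentence embeddings and a cosine-similarity threshold. It is illustrative only and is not Illume’s actual pipeline: the sentence-transformers model name, the threshold value, and the sample comments are all assumptions.

```python
# Illustrative sketch only: groups comments whose sentence embeddings are
# similar, so that "not enough spaces" and "nowhere to park" land together.
# The model name and threshold are assumptions, not Illume's configuration.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

comments = [
    "There is not enough parking downtown.",
    "Nowhere to park near the station.",
    "Please add more bike lanes.",
    "Cycling infrastructure is lacking along the route.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed general-purpose embedding model
embeddings = model.encode(comments)
similarity = cosine_similarity(embeddings)

# Greedy grouping: each comment joins the first group whose seed comment it
# resembles closely enough; otherwise it starts a new group.
THRESHOLD = 0.5
groups: list[list[int]] = []
for i in range(len(comments)):
    for group in groups:
        if similarity[i][group[0]] >= THRESHOLD:
            group.append(i)
            break
    else:
        groups.append([i])

for group in groups:
    print([comments[i] for i in group])
```

Under these assumptions, the two parking comments would be expected to fall into one group and the two cycling comments into another, mirroring how Illume groups differently phrased concerns under a shared theme.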
Another benefit Illume offers is consistency: it applies the same framework to every response,
whereas human coders may vary in their approach (even due to mood). While this consistency is valuable,
we recognize that AI systems are not immune to bias or error. This is why the
human-in-the-loop process (discussed in Section 6.2.2) is critical for oversight and
transparency, building on the suggestions of Borchers et al. Ultimately, this automated step
reduces the manual effort of sorting comments into themes, allowing professionals to focus on
higher-value analysis.
2. Human-in-the-Loop Validation
The second key feature is Illume’s human-in-the-loop process, which addresses
concerns about trust and explainability in AI applications. Illume presents AI-generated theme
categorizations for review, allowing users to accept, reject, or modify each suggestion based on
professional judgment. By documenting these decisions, Illume ensures transparency throughout the
process and builds confidence by giving users control over the final analysis.
As illustrated in the human-in-the-loop (HITL) figure below, the workflow begins when input data is processed by the AI system
to generate initial themes. These results are then reviewed by a human expert (planner or analyst),
who can refine categorizations with edits, prompts, or feedback. The final output is always
human-reviewed and approved before being shared with stakeholders or the public.
In Illume, this oversight occurs twice: first, when the user accepts or rejects a comment’s
placement in a suggested theme; and second, on the rejection screen, when the user decides whether the
rejected comment should be reassigned to another theme. Screenshots of this process are provided in
Appendix B. In practice, this means the planner still reads through each comment, applying their
expertise just as they would in traditional thematic analysis.
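To illustrate how these two review points could be captured for transparency, the sketch below records a single reviewer decision as a small data structure with an audit timestamp. The ReviewDecision type, its field names, and the record_review helper are hypothetical and do not reflect Illume’s actual schema.

```python
# Simplified sketch of recording the two review decisions described above.
# The ReviewDecision fields are hypothetical, not Illume's actual schema.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class ReviewDecision:
    comment_id: str
    suggested_theme: str
    accepted: bool                          # first review: does the comment belong here?
    reassigned_theme: Optional[str] = None  # second review: new theme for a rejected comment
    reviewed_at: datetime = field(default_factory=datetime.now)

def record_review(comment_id: str, suggested_theme: str, accepted: bool,
                  reassigned_theme: Optional[str] = None) -> ReviewDecision:
    """Log a planner's decision so the final analysis remains auditable."""
    if accepted and reassigned_theme is not None:
        raise ValueError("Only rejected comments can be reassigned to another theme.")
    return ReviewDecision(comment_id, suggested_theme, accepted, reassigned_theme)

# First review point: the planner rejects the suggested theme.
# Second review point: the rejected comment is reassigned.
decision = record_review("c-042", "Parking", accepted=False,
                         reassigned_theme="Transit access")
print(decision)
```

Keeping each accepted, rejected, or reassigned decision in a record like this is one way the documented audit trail described above could be preserved alongside the final analysis.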