What is Relative Insight
Relative Insight is a text analytics platform that compares language across two or more datasets to identify statistically significant differences in word and phrase usage. It is used by insights, marketing, and customer experience teams to analyze open-ended survey responses, customer feedback, social content, and other unstructured text. The product emphasizes comparative analysis (e.g., segment vs. segment, brand vs. competitor set, time period vs. time period) and provides outputs intended for reporting and decision support.
Comparative language analysis focus
The platform is designed around comparing datasets rather than only summarizing a single corpus. This supports common research and CX workflows such as segment comparisons, pre/post campaign analysis, and market-to-market language differences. The comparative approach can reduce manual effort when the primary question is “what is different between groups,” not just “what are the themes.”
Works with varied text sources
Relative Insight can be applied to multiple forms of unstructured text, including survey verbatims and other feedback channels. This makes it suitable for organizations that need a consistent method across research studies and ongoing feedback programs. It can complement quantitative results by adding explainable language differences tied to segments or time periods.
Outputs suited for reporting
The product produces interpretable results such as lists of over-indexing terms and phrases between datasets. These outputs map well to stakeholder reporting and can be incorporated into insight narratives without requiring advanced data science tooling. For teams that primarily need defensible language comparisons, this can shorten analysis cycles.
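Relative Insight's exact scoring method is proprietary, but the kind of "over-indexing terms" output described above is commonly approximated with a frequency-ratio comparison scored by Dunning's log-likelihood (G2) statistic. The sketch below is illustrative only, not the platform's implementation; the function name and whitespace tokenization are assumptions for the example.

```python
from collections import Counter
import math

def over_indexing_terms(corpus_a, corpus_b, min_count=2):
    """Rank terms that over-index in corpus_a relative to corpus_b,
    scored with Dunning's log-likelihood (G2) statistic.
    Illustrative sketch only -- not Relative Insight's actual method."""
    tok_a = [w.lower() for doc in corpus_a for w in doc.split()]
    tok_b = [w.lower() for doc in corpus_b for w in doc.split()]
    freq_a, freq_b = Counter(tok_a), Counter(tok_b)
    n_a, n_b = len(tok_a), len(tok_b)
    results = []
    for term, a in freq_a.items():
        if a < min_count:
            continue
        b = freq_b.get(term, 0)
        # Expected counts under the null hypothesis that the term
        # occurs at the same rate in both corpora.
        e_a = n_a * (a + b) / (n_a + n_b)
        e_b = n_b * (a + b) / (n_a + n_b)
        g2 = 2 * (a * math.log(a / e_a)
                  + (b * math.log(b / e_b) if b else 0))
        if a / n_a > b / n_b:  # keep only terms over-represented in A
            results.append((term, a, b, round(g2, 2)))
    # Higher G2 = stronger evidence the usage difference is not chance.
    return sorted(results, key=lambda r: r[3], reverse=True)
```

A higher G2 score indicates stronger evidence that the difference in usage is not due to chance, which is what makes outputs of this shape defensible in stakeholder reporting.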
Not a full end-to-end CX suite
Relative Insight focuses on text comparison and does not function as a full experience management platform. Organizations may still need separate systems for survey program management, omnichannel case management, and closed-loop workflows. This can increase integration and governance work when building an end-to-end feedback stack.
Limited advanced modeling breadth
Teams looking for broader machine learning capabilities (custom model development, extensive feature engineering, or large-scale predictive pipelines) may find the platform narrower than general-purpose analytics environments. Some advanced use cases may require exporting data to external tools for modeling and automation. This can add operational complexity for analytics-heavy organizations.
Comparisons require careful setup
The value of the results depends on how datasets are defined, cleaned, and segmented. Poorly matched comparison groups or inconsistent preprocessing can lead to misleading differences. Users often need clear methodological guidance and governance to ensure comparisons remain valid across teams and time.
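One concrete way to guard against the preprocessing inconsistencies described above is to force both comparison groups through a single shared cleaning function, so that any differences the analysis surfaces come from the language itself rather than from diverging pipelines. The sketch below is a minimal illustration under assumed cleaning rules (lowercasing, punctuation stripping, a tiny stopword list); the function names are hypothetical.

```python
import re

STOPWORDS = frozenset({"the", "a", "an", "and", "or"})  # assumed toy list

def normalize(text):
    """One shared cleaning step: lowercase, strip punctuation,
    tokenize on whitespace, drop stopwords."""
    text = re.sub(r"[^a-z0-9\s]", " ", text.lower())
    return [t for t in text.split() if t not in STOPWORDS]

def prepare_comparison(group_a, group_b):
    """Run BOTH datasets through the same normalize() function.
    Applying different cleaning to each group is a common source
    of spurious 'differences' between comparison datasets."""
    return ([normalize(doc) for doc in group_a],
            [normalize(doc) for doc in group_b])
```

Centralizing the pipeline like this also makes comparisons reproducible across teams and time periods, since the cleaning rules live in one governed place rather than in each analyst's ad hoc preparation.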