AI visibility platforms such as Radix and PromptWatch have found G2 to be the most cited software review platform.

Radix analyzed 10,000+ searches on ChatGPT, Perplexity, and Google’s AI Overviews and found that G2 has “the highest influence for software-related queries” at 22.4%.
Similarly, PromptWatch found G2 to be the most visible B2B software review platform across 100 million+ clicks, citations, and mentions from AI search engines like ChatGPT, tracked across 3,000+ websites.
The data suggests that G2 has a significant impact on software searches in LLMs (e.g., ChatGPT, Perplexity, Gemini, Claude). As an independent researcher, I wanted to see whether I could detect that relationship in our own data and validate the claims.
To do that, I analyzed 30,000 AI citations and share of voice (SoV) data from Profound, spanning 500 software categories on G2.
- Citations: A site, G2 in this case, is cited in an LLM response with a link back to it.
- SoV: The number of citations a site gets divided by the total available number of citations (see the sketch below).
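As a concrete illustration (the function and numbers are mine, not Profound's), SoV is simply a share calculation:

```python
def share_of_voice(site_citations: int, total_citations: int) -> float:
    """A site's citations as a fraction of all citations available in the category."""
    return site_citations / total_citations if total_citations else 0.0

# Hypothetical example: 450 of 2,000 tracked citations go to one site -> 22.5% SoV.
print(share_of_voice(450, 2000))  # 0.225
```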
What the data revealed
Categories with more G2 reviews get more AI citations and a higher SoV. When ChatGPT, Perplexity, or Claude needs to recommend software, G2 is among the first sources it cites. Here’s what I found.
1. More reviews are linked with more citations
The data shows a small but reliable relationship between LLM citations and G2 software reviews (regression coefficient: 0.097, 95% CI: 0.004 to 0.191, R-squared: 0.009).
Categories with 10% more reviews have up to roughly 2% more citations. That is after removing outliers, controlling for category size, and using conservative statistical methods. The relationship is clear.

2. Categories with more reviews have a higher SoV
I also found a small but reliable relationship between G2 reviews and SoV (regression coefficient: 0.113, 95% CI: 0.016 to 0.210, R-squared: 0.012).
If reviews rise by 10%, SoV increases by roughly 0.2-2.0%. The sketch below shows how both coefficients translate into these percentage ranges.
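Under the log-log specification noted in the methodology, each coefficient can be read as an elasticity. Here is a minimal sketch of the arithmetic, using only the coefficients and confidence bounds reported above:

```python
def pct_change(elasticity: float, reviews_increase: float = 0.10) -> float:
    """Predicted % change in the outcome when reviews rise by `reviews_increase`,
    under a log-log model: outcome multiplier = (1 + increase) ** elasticity."""
    return (1 + reviews_increase) ** elasticity - 1

# Citations model: coefficient 0.097, 95% CI 0.004 to 0.191
print([round(pct_change(b) * 100, 2) for b in (0.004, 0.097, 0.191)])
# -> [0.04, 0.93, 1.84]: roughly 0% to 2% more citations for 10% more reviews

# SoV model: coefficient 0.113, 95% CI 0.016 to 0.210
print([round(pct_change(b) * 100, 2) for b in (0.016, 0.113, 0.210)])
# -> [0.15, 1.08, 2.02]: the "roughly 0.2-2.0%" range above
```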

What does all this mean?
The number of citations and the SoV are primarily determined by factors outside this analysis: brand authority, content quality, model training data, organic search visibility, and cross-web mentions. Reviews explain less than 2% of the variance, which means they are a small piece of a larger puzzle.
But why G2 specifically?
AI models face a verification problem. They need scalable, structured signals to assess software quality. G2 provides three attributes that matter: verified buyers (which reduce noise), standardized schema (machine-readable), and review velocity (a signal of current market activity). With more than 3 million verified reviews and the highest organic traffic in software categories, G2 offers a signal density that other platforms can't match.
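To make "standardized schema (machine-readable)" concrete, here is an illustrative sketch of schema.org-style review markup expressed as a Python dict; the product and values are hypothetical, not taken from an actual G2 profile:

```python
# Hypothetical schema.org Product / AggregateRating / Review payload.
# Structured fields like these are what make review signals machine-readable.
review_markup = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example CRM",  # hypothetical product name
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": 4.5,
        "reviewCount": 1280,
    },
    "review": [
        {
            "@type": "Review",
            "reviewRating": {"@type": "Rating", "ratingValue": 5},
            "author": {"@type": "Person", "name": "Verified Buyer"},
            "reviewBody": "Easy to roll out to a 50-person team.",
        }
    ],
}
```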
A 10% increase in reviews correlating with up to a 2% increase in citations sounds modest. But consider the baseline: most categories receive limited AI citations. A 2% lift on a low base may be practically negligible. However, in high-volume categories where hundreds of citations occur monthly, a 2% shift could meaningfully alter competitive positioning. In winner-take-most categories where the top three results capture disproportionate attention, small citation advantages compound.
What matters is not your raw review count, but your position relative to competitors in your category. Holding 200 of a category's 500 reviews (a 40% share) has a different impact than holding 200 of 5,000 reviews (a 4% share).
Why this matters now
The buying journey is transforming. In G2’s August 2025 survey of 1,000+ B2B software buyers, 87% reported that AI chatbots are changing how they research products. Half now start their buying journey in an AI chatbot instead of Google, a 71% jump in just four months.
The real disruption is in shortlist creation. AI chat is now the top source buyers use to build software shortlists, ahead of review sites, vendor websites, and salespeople. Buyers are one-shotting decisions that used to take hours. A prompt like “give me three CRM solutions for a hospital that work on iPads” instantly creates a shortlist.
When we asked buyers which sources they trust to research software solutions, AI chat ranked first. Above vendor websites. Above salespeople.
When a procurement director asks Claude for the “best CRM for 50-person teams” today, they’re getting a synthesized answer from sources the AI model trusts. G2 is one of those sources. The software industry treats G2 as a customer success box to check. The data suggests it has become a distribution channel: not the only one, but a measurable one.
What actions you can take based on these research insights
The best way to apply the data is to invest in reviews and your G2 Profile:
- Write a profile description (250+ characters) that clearly highlights your unique positioning and value props.
- Add detailed pricing information to your G2 Profile.
- Drive more reviews to your G2 Profile, for example by linking to your G2 Profile page from other channels.
- Initiate and engage with discussions about your product and market.
Methodology
To conduct this research, we used the following approach:
We took 500 random G2 categories and assessed:
- Approved reviews in the last 12 months
- Citations and SoV in the last 4 weeks
We removed rows where:
- Citations in the last 4 weeks were under 10
- The visibility score was 0%
- Approved reviews in the last 12 months were below 100
- Reviews were significant outliers
After this pruning, the median was unchanged, which supports that the filtering did not bias the center of the distribution.
We analyzed the regression coefficient, 95% confidence interval, sample size, and R-squared.
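For transparency, here is a minimal sketch of the filtering and regression steps described above, using pandas and statsmodels; the file name and column names (reviews_12m, citations_4w, sov) are assumptions for illustration, not the actual dataset schema, and the outlier rule shown is one plausible choice:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Category-level dataset (file and column names are illustrative).
df = pd.read_csv("g2_categories.csv")  # columns: category, reviews_12m, citations_4w, sov

# Row filters described above.
df = df[(df["citations_4w"] >= 10) & (df["sov"] > 0) & (df["reviews_12m"] >= 100)]

# Drop extreme review outliers (here: outside the 1st-99th percentiles, an assumed rule).
lo, hi = df["reviews_12m"].quantile([0.01, 0.99])
df = df[df["reviews_12m"].between(lo, hi)]

# Log-log regressions: the coefficient on log_reviews reads as an elasticity.
df["log_reviews"] = np.log(df["reviews_12m"])
df["log_citations"] = np.log(df["citations_4w"])
df["log_sov"] = np.log(df["sov"])

citations_model = smf.ols("log_citations ~ log_reviews", data=df).fit()
sov_model = smf.ols("log_sov ~ log_reviews", data=df).fit()

# Coefficient, 95% CI, sample size, and R-squared for each model.
for name, model in [("citations", citations_model), ("sov", sov_model)]:
    ci_low, ci_high = model.conf_int().loc["log_reviews"]
    print(name, round(model.params["log_reviews"], 3),
          (round(ci_low, 3), round(ci_high, 3)), int(model.nobs), round(model.rsquared, 3))
```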
Limitations include the following:
- Cross-sectional design limits causal inference: This analysis examines associations at a single point in time (reviews from the prior 12 months, citations from a 4-week window). We cannot distinguish whether reviews drive citations, citations drive reviews, or both are jointly determined by unobserved factors such as brand strength or market positioning. Time-series or panel data would be required to establish temporal precedence.
- Omitted variable bias: The low R² values (0.009-0.012) indicate that review volume explains less than 2% of the variation in citations and SoV. The remaining 98% is attributable to factors outside the model, including brand authority, content quality, model training data, organic search visibility, and market maturity. Without controls for these confounders, our coefficients may be biased.
- Aggregation at the category level: We analyze categories rather than individual products, which obscures within-category heterogeneity. Categories with identical review counts but different distributions across products may exhibit different AI citation patterns. Product-level analysis would provide more granular insights but would require different data collection.
- Sample restrictions affect generalizability: We excluded categories with fewer than 100 reviews, fewer than 10 citations, or extreme outlier values. While this improves statistical properties, it limits our ability to generalize to small categories, emerging markets, or products with atypical review patterns. The pruning maintained the median, suggesting central tendency is preserved, but tail behavior remains unexamined.
- Single-platform analysis: This study focuses exclusively on G2. Other review platforms (such as Capterra and TrustRadius) and information sources (such as Reddit and industry blogs) also influence AI model outputs. G2’s dominance in software categories may not extend to other verticals, and multi-platform effects remain unquantified.
- Model specification assumptions: We use log transformations to address skewness and assume linear relationships on the transformed scale. Alternative functional forms (such as polynomial and interaction terms) or modeling approaches (such as generalized linear models and quantile regression) might reveal non-linearities or heterogeneous effects across the distribution.
- Measurement concerns: Citations and SoV depend on Profound’s tracking methodology and query selection. Different tracking tools, query sets, or AI models may produce different citation patterns. Review counts depend on G2’s verification process, which may introduce selection effects.
These limitations suggest our estimates should be interpreted as suggestive associations rather than causal effects. The relationship between reviews and AI citations is statistically detectable but operates within a complex system of multiple influencing factors.

