When the leadership team at a global online university first looked at their AI-sourced traffic, the numbers seemed impressive: nearly half a million sessions coming from LLM search. But beneath the surface was a deeper problem: visibility was high, yet the signal was low. Large language models were mixing up tuition details, misquoting program information, and presenting inconsistent descriptions of the school across platforms.
The result was a familiar frustration for many growth teams: lots of people talking about the brand, very few taking the next step. Conversions lagged, misinformation spread quickly, and the school had little control over how AI systems represented its identity. Their goal wasn’t just more traffic: it was more qualified sales.
That became the brief: fix the narrative across AI ecosystems, ensure every model told the same, accurate story, and turn scattered visibility into measurable enrollment growth.
The Turning Point: From Being Mentioned to Being Understood
Our first audit revealed the scale of the challenge. Across ChatGPT, Perplexity, Copilot, and Gemini, more than a third of answers contained factual errors, outdated pricing, or partial descriptions of the university’s accreditation. Even when the school ranked highly in search, AI assistants often pulled from unverified, outdated, or competitor-adjacent content.
This was the moment everything clicked. If we could correct the information foundation that models rely on, then every downstream metric (sessions, trust, sentiment, conversion rate) could improve. Instead of chasing prompts or one-off optimizations, we built a Generative Engine Optimization (GEO) strategy around one principle: AI systems should quote the university’s true facts, in the university’s own words.
Over the next four months, that strategy transformed performance. AI sessions grew steadily, but more importantly, sales (application submissions) surged to 74,154, conversion rates lifted across every major LLM, and brand sentiment hit its highest point to date.
What We Did: Building an Information Ecosystem AI Could Trust
We approached the challenge with one principle in mind: AI answers only improve when the information behind them improves. Instead of fixing individual responses or chasing prompt hacks, we rebuilt the entire environment that large language models rely on to understand the university.
Fixing the Facts at the Source
The first step was removing noise and contradictions. Tuition, accreditation, deadlines, and program details appeared across hundreds of URLs and external listings. Many were slightly different, which caused AI models to blend or misinterpret information.
We carried out a deep cleanup that included:
- Aligning all tuition, accreditation, and program descriptions across the main site
- Consolidating duplicate or outdated pages
- Updating schema, FAQ blocks, and structured data so facts were surfaced clearly
- Correcting legacy third-party listings that LLMs relied on without verification
Once that foundation was stable, we created short, unambiguous answer blocks for the questions prospective students ask most. These became the “canonical truth” that AI systems now quote more consistently.
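One common way to expose those canonical answer blocks to crawlers is FAQ structured data. The sketch below shows the general shape; the question and the tuition figure are placeholders, not the university’s actual facts.

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How much is tuition per credit?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Tuition is $400 per credit for undergraduate programs, with no hidden fees."
      }
    }
  ]
}
```

Because the answer text is short, self-contained, and identical to the copy on the page, models that ingest the markup have a single unambiguous string to quote.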
Making the Right Pages Easy for AI to Find
With consistent information in place, we focused on improving the university’s technical footprint. We simplified crawl paths, reworked sitemaps, and ensured that verified content was always the easiest for AI systems to access.
Key improvements included:
- Streamlining metadata and internal linking
- Reducing URL variants that confused indexing
- Cleaning and tightening robots and crawl directives
- Prioritizing authoritative pages over older, lower-value ones
This allowed models to pull from the right sources with much less friction.
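A simplified sketch of what the tightened crawl directives looked like in practice (the domain and paths here are hypothetical, not the university’s actual configuration):

```text
# robots.txt — illustrative sketch only
User-agent: *
Disallow: /search?        # parameterized duplicates
Disallow: /print/         # legacy print variants
Allow: /

Sitemap: https://www.example.edu/sitemap.xml
```

The principle is simple: fewer duplicate paths in the crawl means the authoritative pages carry a larger share of the signal.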
Strengthening Trusted Citations Across the Web
LLMs look for patterns and consensus. If five trusted sources repeat the same details, that information becomes “fact” inside the model.
We expanded the university’s influence by:
- Securing high-authority PR and program-level mentions
- Updating partner and directory profiles with verified data
- Increasing brand-owned citations and reducing competitor leakage
This broader ecosystem helped models anchor on accurate information, not scattered third-party guesses.
Monitoring AI Answers Like a Product
Every week we ran audits across ChatGPT, Perplexity, Copilot, Gemini, and Claude. These reviews flagged inaccuracies early and allowed us to ship improvements quickly. Over time, small weekly changes led to large, compounding gains in accuracy, sentiment, and conversion performance.
The Results: From Strong Visibility to Record Applications
Two quarters after launching the GEO program, the university wasn’t just being mentioned more; it was being represented correctly, consistently, and positively. And that shift had a direct impact on student acquisition.
A Surge in Applications and Conversion Rates
Applications jumped to 74,154, a 71.6% increase compared to the earlier period. The application conversion rate rose by 0.72 percentage points, a substantial lift in higher education, where even marginal gains move thousands of students.
What became clear was that better information leads to better applicants. Users arriving from AI sources were more informed about tuition, timelines, and accreditation, and much more ready to apply.
AI Channels Became High-Intent Converters
ChatGPT quickly emerged as a top performer with a 6.58% conversion rate. Perplexity, Copilot, and Gemini also saw strong conversions, between 2.85% and 3.41%, outperforming most traditional organic search channels.
A few highlights:
- ChatGPT: 6.58% CR
- Perplexity: 3.19% CR
- Copilot: 2.85% CR
- Gemini: 3.41% CR
These channels didn’t just drive traffic; they drove sales.
AI Answers Became Far More Accurate and Positive
Our weekly tracking showed a dramatic improvement in how models positioned the university:
- 76% of answers placed the university in the top position
- Only 2% fell into the bottom position
- 96% of answers carried positive or mixed sentiment
This meant AI platforms were not just aware of the university; they were recommending it.
A Healthier, More Controlled Citation Landscape
Citations began shifting in the university’s favor:
- Brand-owned citations increased to 17%
- Competitor mentions dropped to 5%
- High-authority third-party support remained strong
The result was a more reliable and balanced ecosystem around the school’s online identity.
Traffic Quality Improved Faster Than Traffic Volume
While sessions grew a modest 11.7%, the real story was the higher intent behind those sessions. Students arriving from AI assistants now understood cost, programs, and accreditation before landing on the site, making them far more likely to convert.
Visibility improved, but meaningful visibility improved even more.

- AI Performance over Time: Sessions from ChatGPT, Perplexity, Copilot, and Gemini climbed steadily, with spikes aligning to major content and technical releases. The trend line reflects consistent growth and compounding improvements.
Why It Worked: A Strategy Built for How AI Actually Thinks
The project succeeded because it was built around how modern AI systems gather, evaluate, and synthesize information. Instead of trying to manipulate prompts or chase superficial ranking tricks, we focused on the levers that truly influence LLM outputs.
We Created a Single Source of Truth
AI models blend information from thousands of signals. When those signals conflict, the model improvises. By aligning facts across every major touchpoint (on-site content, structured data, third-party listings, PR, and program descriptions), we eliminated the contradictions that caused misinformation in the first place.
Once the “truth” was consistent and easy to find, models naturally gravitated toward it.
We Made Verified Information the Most Discoverable
LLMs rely heavily on crawlable structures, schema, sitemaps, and canonical patterns to understand which information is trusted. By simplifying technical architecture and cleaning outdated content, we removed the clutter that had been diluting authoritative pages.
This meant AI systems could interpret the university’s official information more confidently and more often.
We Built Consensus Across the Web
LLMs upgrade confidence when multiple high-authority sources repeat the same facts. Strengthening citations, while reducing competitor adjacency, created a clearer narrative for models to learn from. The more consistent the external ecosystem became, the more accurate the answers.
We Treated AI Search as a Living System
Weekly audits weren’t just checks; they were feedback loops. Each review highlighted small errors, outdated details, or shifts in LLM behaviour, allowing us to respond quickly. This product-like approach built momentum and prevented regressions.
The combination of stable information, strong citations, and constant iteration produced results that traditional SEO alone could never achieve.

- Position & Sentiment: In the most recent quarter, 76% of answers appeared in the top position, 22% in the middle, and only 2% at the bottom. Sentiment analysis showed 96% positive or mixed responses, demonstrating strong brand alignment.

- Citations by Owner: Brand-owned citations rose to 17%, third-party sources accounted for 78%, and competitor mentions dropped to 5%. Increasing our share of first-party citations remains a key objective.

Together, these visuals highlight how disciplined GEO work not only boosted engagement but also improved the quality and trustworthiness of AI responses.
What’s Next: Scaling GEO From Brand-Level Wins to Program-Level Precision
The next stage focuses on deepening the university’s AI footprint so that models can not only recommend the institution, but also match prospective students to specific programs.
Expanding Program-Level Entities
Right now, LLMs understand the university well at a brand level. The next step is making each degree (business, computer science, health science, and more) an independent, well-defined entity with clear metadata, citations, and FAQ structures.
This allows AI systems to suggest specific programs when users search for career paths, skill development, or degree options.
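Schema.org’s `EducationalOccupationalProgram` type is one natural way to define each degree as its own entity. The sketch below is illustrative; the program name, provider, and occupational code are placeholders, not the university’s actual markup.

```json
{
  "@context": "https://schema.org",
  "@type": "EducationalOccupationalProgram",
  "name": "Bachelor of Science in Computer Science",
  "provider": {
    "@type": "CollegeOrUniversity",
    "name": "Example Online University"
  },
  "timeToComplete": "P4Y",
  "occupationalCategory": "15-1252.00"
}
```

Tying each program to an occupational category is what lets an assistant answer "what should I study to become a software developer?" with a specific degree page rather than a generic brand mention.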
Automating Freshness for Pricing and Deadlines
Tuition, enrollment dates, and scholarship information change frequently. We’re building automated freshness cycles for high-value facts so AI systems always see the most current, authoritative version.
Strengthening First-Party Citations
We aim to increase brand-owned citations from 17% to 25%+, making the university’s official information even more dominant across the web ecosystem.
Improving Attribution for AI-Assisted Conversions
Not all students apply directly after an AI session; many begin their research in ChatGPT or Perplexity and return days or weeks later. We’re refining our attribution models to better capture these long, AI-assisted journeys.
Together, these next steps will build on the momentum created so far and help the university maintain leadership as AI search continues to evolve.
Conclusion: Turning AI Visibility Into Measurable Enrollment Growth
Generative Engine Optimization isn’t about gaming prompts or tricking models. It’s about shaping the information environment so that AI systems understand, and accurately represent, who you are. For this global online university, aligning facts, strengthening citations, and improving technical clarity transformed millions of AI-driven sessions into 74,154 applications in just two quarters.
The lesson is simple: when you control your story across AI ecosystems, you control the outcomes that matter.
If you’re ready to turn AI visibility into real business results, let’s talk.
Igal Stolpner co-founded Webify after leaving his role as VP Growth at Investing.com, where he started as the first employee in 2007 and stayed until the company was sold in 2021.
Igal led the company’s growth from zero to over 250M monthly sessions, with over 50% of traffic coming from organic search, taking it into the top 200 sites globally.
