Most SEOs are of the mind that CTR impacts ranking. Google has always said the opposite, at least publicly. Who’s right?

Recently, we started evolving our thinking and moving towards a way that CTR could actually impact ranking, but not do so directly. I discuss this thought process and how it might impact your strategic decisions in today’s video.

Video Transcription

Hi, I’m Ross Hudgens, founder of Siege Media, and today I want to talk to you about why click-through rate is not a ranking factor, but, in fact, it might still impact ranking.

I, for a long time, was of the mind that ‘obviously click-through rate is a ranking factor’. How could Google possibly not use that to change the results in terms of what people click, what people dwell on, and things like that?

And that’s been the uniform thought process in the SEO world for a long time. In the last couple months I had an ‘a-ha’ moment about how click-through rate can both not be a ranking factor but still impact ranking, which I think is more reliable.

If you think about what would drive you to click something, like brand, or freshness, or an attractive title: those same things would cause you to dwell longer, stick longer, and would also build trust in what you’re clicking. That would then inform a longer experience, and hopefully an experience that drives you to actually be satisfied with your result.

If you click something that you like, you’re more likely to dwell on it because of the expectation the title sets, and you’re less likely to go back to the search results, click something else, or run another search and click something else.

And that’s something we do know, or at least more reliably know, that Google would use as a ranking factor, because it’s essentially “pogo-sticking”: you’re clearly showing that you’re not satisfied with that result.

Say you’re going through a search result, let’s call it life insurance, and you see, in this example, State Farm and NerdWallet. You immediately recognize that these are two relatively large brands. Obviously, everyone probably knows who State Farm is, and their brand name immediately evokes trust when you see it in the title tag.

That is a click-through rate signal that’s going to make you click that more often, but then it’s also setting the expectation that you know what you’re going to get and you will have a good experience here, and that combination will probably lead you to not go back.

But if you scroll further down this result, you see other brands, such as Policygenius.

You might not recognize Policygenius if you’re not in the SEO world. They have a rich snippet rating that shows 540 reviews. Technically that star rating might be a click-through trigger, and you might click this result more often, which is probably a good thing for them. But what differs here is what happens after the click.

I don’t believe the stars they apply there actually confer any trust or signal that gets you to dwell longer, because the rating isn’t doing anything to show that we should trust these reviews.

It’s only drawing your eye in this instance, whereas in the State Farm example, the title tag click-through trigger actually got you to stick.

And other examples of this beyond brand are things like the date. If you search “best headphones” and all the results are old but you see one that was updated in the last week, your mind immediately sets the expectation that the article is updated, fresh, and up-to-date; you’re ready to go.

Those things will set the standard that will probably make you less likely to bounce back. If you compare that to that same search result, “best headphones”, you go to something that’s a year old, you click that, and you immediately know that it’s slightly old.

So if you go there and you see something even remotely off, or something that signals it’s slightly outdated or might not be trustworthy, you’re more likely to bounce back to the search results and click something else.

And you can see how that kind of front-door experience of the search result can inform whether or not you dwell, stay, and stick overall. Another thing that might impact this is the number of items in a list post. If, say, you promise 50 results and your competitor promises 25, and you deliver on that expectation of 50, people are going to dwell longer.

There’s a reason why curb appeal increases the value of a home. For the same reason, a title tag can increase the value of a page.

I believe a good click-through rate trigger actually reflects an element of the result itself that makes you want to stick and stay. Bad click-through rate triggers, which I don’t believe could inform ranking, are things like what we saw with the Policygenius rating result.

This rating on the search result is not actually saying anything about life insurance or about the quality of their life insurance, at least from my vantage point.

I’m an SEO of course, so I’m a pessimist about how this kind of markup gets used. And Policygenius is generally doing a really good job with their SEO, so I shouldn’t disparage them that much, but I think this is a good example of a signal that would not impact their rankings. Yes, you might click it more often on average, but I don’t believe the rankings would be impacted by that.

Hopefully now you can see why, because click-through rate can be manipulated, if I’m Google I might not use it as a signal. Also, why would I need to look at click-through rate when I have dwell time and satisfaction after the click as indicators? All of that is already captured by the number of visits, the repetition, and the trustworthiness connected to that front-door experience of the title.

So my overall recommendation here is: yes, optimize for click-through rate. Those same triggers translate into good on-page experiences, provided the click-through trigger is a real one, not a manipulated one like a star rating that doesn’t apply. Hopefully, this helps you see the matrix a little bit.

I thought it was an interesting realization once you get there, and it makes sense why Google has so often said that it’s not a signal. If we use this thought process, it’s very possible, and I think very reasonable, that click-through rate is not a direct signal, but it almost certainly connects to ranking.

Does clicking that result specifically draw higher rankings? I don’t believe so, and it makes sense to me now why that might be the case.

So hopefully, you’ve found this explanation useful. If you have any thoughts on it, I’d definitely love to hear them in the comments. Please give us a thumbs-up, subscribe, and let me know what you thought. Thanks.

Comments

  • Not a ranking factor but may impact ranking, potato, potatoe.

    • Yeah maybe, I think it could impact how much effort you put into CTR, though. With a list of priorities slotted, and you knew some star rating Schema was unlikely to *actually* make an impact on ranking despite slightly impacting CTR, I think that could inform where you spend your time and effort. Trying to get more CTR is generally always a good thing, but as every company has priorities they need to juggle, this might be one they push off if this is believed to be true.

  • ah so this makes some sense but doesn’t explain the success of those real time CTR experiments, where all these variables are controlled for. couldn’t find the one Rand did on twitter but from memory, he asked followers to click a result on a given SERP, and watched the rankings improve

    • I do remember that test — it happened at MozCon/I believe he tried to replicate. Was a bit old though and never stuck. I think of more recent tests and nobody has been able to prove CTR has an impact. I wonder if Rand’s test may also create a change because of freshness/Google thinking they’re missing something important, but that generally changes back.

      I specifically think of this one from Dejan SEO: https://dejanseo.com.au/ctr-manipulation/ which showed no impact. I think this would show what I’m saying has some credibility.

      • thanks for that! I hadn’t seen this or even thought of using mturk for this kind of testing. would be great if any new tests tried to parse out those factors over a longer period of time or across more types of queries, maybe longer tail and less traditionally stable high volume terms like “life insurance”