Once again we had a really great hangout. John was asked some very interesting questions, including whether it hurts to have spammy pages in the index and how to tell if Google can see links on the mobile version of your site, and he shared some interesting info on paginated content.

Once spammed pages have been removed, does it hurt a site if they are still in the index?

0:33

This was a really interesting question! John Mueller has stated in the past that when Google assesses quality for a site, every indexed page counts towards that assessment. As such, my thought was that if Google is seeing a large number of pages in the index that are there as a result of spam, this could contribute to an assessment of low quality.

In this question, the site owner explained that they were previously hit with a user generated spam manual action. They noindexed all of the offending pages, but that didn't get them out of the index (my thought: most likely because Google rarely recrawled these pages, as they were low quality). Next, they 410'd these pages, but that didn't get them out of the index either. So, they created an XML sitemap with 3,000 URLs to try to get Google to recrawl them, the hope being that Google would see the 410s and then drop the pages.
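For context, a sitemap used this way is nothing special; it is just a plain list of the removed URLs so that Google re-fetches them and sees the 410. A minimal sketch, with hypothetical URLs:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Each entry is a spammed URL that now returns 410 Gone -->
  <url>
    <loc>https://www.example.com/search?q=spammy-term-1</loc>
  </url>
  <url>
    <loc>https://www.example.com/search?q=spammy-term-2</loc>
  </url>
  <!-- ...and so on for the rest of the removed URLs -->
</urlset>
```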

John had a good response, but here is the part that was the most interesting to us:


"From our point of view it's not that you need to take care of this but we are kind of taking care of it for you in the search results already. It wouldn't negatively affect your site to do something to kind of have those removed a little bit faster but it's not something you would urgently need to do."

Later, answering the same question, John mentioned using the URL removal tool to get these removed from the index faster. He said,

"So it's something where if you were to remove them with maybe the URL removal tool it would be the same thing as if the web spam team. We're flagging them as spam because that's kind of what they were, they’re people taking advantage of the search results pages on your site. That's okay, I know that it’s obnoxious, we're kind of used to that and we try to kind of surgically remove those specific pages from the search results and if we can do that then we don't need to affect the rest of the site."

Our advice when we review sites with similar problems is to do as much as you can to get these pages out of the index. Given John's answer, is that even necessary? Our thought is that there is still a possibility that Google's algorithms could count these pages, so it's better to be safe than sorry and go ahead and remove pages like this from the index.

I think it's important to mention here, though, that this site had a manual action, which means that Google is already ignoring these URLs. For a site that did not have a manual action, getting these pages removed from the index is likely a really good idea.


Summary: You could use the URL removal tool to get these pages removed more quickly. However, John implied that Google already knows to ignore these; in this case, that is likely because the site had a manual action. Our advice: if you have obviously low quality pages in Google's index, do all you can to remove them.


 

How can you tell if Google is able to see links on your site that are reliant on JavaScript?

6:32

The question asked was about a site that uses JavaScript to create a drop down list of links in their menu. They wanted to know how to determine whether Google is seeing these links.

"At the moment the site isn't being mobile first indexed, so we will use the desktop version for crawling and indexing. I could imagine, depending on what all is on the site and the mobile version, it might be that we would switch that to mobile first indexing, and then that would be something that might make it harder to crawl the rest of the website. But for evaluating when a website is ready for mobile first indexing we also take into account the links on the site, especially the internal links. So if, for example, those internal links are missing completely on the mobile version, or you just have like a text field and type the query rather than using a drop-down, then that's something where we probably wouldn't switch the site over to mobile first indexing. But still, it seems like something that's probably worth double checking on your site, to make sure that the drop-down also works on mobile. And then you should be able to see that in the mobile-friendly test, where if you look at the rendered HTML it should have the drop-down and the links there too. So my guess is at the moment it's not critical, but it's definitely something I wouldn't put off for too long."


Summary: This is important for sites once they are moved to mobile first indexing. Use Google's mobile-friendly test and check whether the rendered HTML contains the drop-down and its links.
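In practice, what you want to find in the rendered HTML of the mobile-friendly test is plain anchor links in the menu, whether or not JavaScript built them. A hypothetical example of what a crawlable drop-down might render to:

```html
<!-- After rendering, the JavaScript-built menu should contain real <a href> links -->
<nav id="main-menu">
  <ul class="dropdown">
    <li><a href="/products/widgets">Widgets</a></li>
    <li><a href="/products/gadgets">Gadgets</a></li>
  </ul>
</nav>
<!-- Links that only appear on click, or that use onclick handlers without an href,
     may not be seen as links by Googlebot -->
```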


 

Why would a page be marked in the GSC index coverage report as noindexed when it is not?

19:54

"I took a quick look at this particular URL and from what I can tell in our systems, we last crawled or processed this URL a couple months ago. So, it might be that there's no meta robots tag on there now but maybe there was one in the past. In general, this is something that does take a bit of time to be reprocessed, so a couple of months is kind of normal for a URL that might not be your homepage or your primary page on a site. So, what will probably happen there is that we will recrawl and reprocess this at some point and see that there is no meta robots tag anymore and we'll index it normally. You can also encourage us to do this a little bit faster by using the URL inspection tool and using the live test and from there, I believe, submitting it to indexing. So, that's something that helps us to understand that a URL has changed and we should make sure that we have the most recent version so that we can reflect that in the search results."

 


Summary: This is probably because the page used to have a noindex tag and hasn't been crawled since the tag was removed. You can speed up recrawling by running the "live test" in the URL inspection tool and then submitting the page for indexing.
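For reference, the directive in question is a meta robots tag in the page's head; if Google last crawled the page while the tag was present, the index coverage report will keep showing the page as noindexed until it is recrawled. A minimal example:

```html
<head>
  <!-- A page last crawled with this tag will still be reported as noindexed,
       even after the tag has since been removed from the live page -->
  <meta name="robots" content="noindex">
</head>
```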


 

Site's traffic dropped after being moved to mobile first indexing. Are the two connected?

21:30


"So, in general moving to mobile first indexing is something that happens very quickly and if there were any issues associated with the site in the mobile version, then on the one hand we would avoid moving it to mobile first indexing and on the other hand you would see those changes pretty much immediately as soon as we had it in the mobile first indexed version. So, if you’re seeing changes in September or sometime later, then those would be normal organic ranking changes as they always have. I have seen on twitter that, I think in September or august, there were some ranking changes that people were seeing so I could imagine that these are just kind of the normal core ranking algorithm changes that we always have. It wouldn’t be related to the mobile first indexing."


Summary: Google tries to move sites to mobile first indexing only when they're ready to move without negative effects. If a drop were related to MFI, it would show up almost immediately after the move happened. These drops were more likely due to algorithm updates (e.g. August 1 and September 27 were big updates).


 

When using rel=prev/next, do links to the previous and next pages need to be present?

22:45


"Yes, if we can understand which pages belong together with rel next and rel previous. But if there are no links on the page at all then it’s really hard for us to crawl from page to page. So, using the rel next and rel previous link elements in the head of a page is a great idea to tell us how these pages are connected. But you really need to have on page normal HTML links that go between one page to the next page and maybe to view all or something like that. So, that’s really still recommended."

 


Summary: It's not enough to just use rel=prev/next. The links to those pages must be present on the page. A link to a "view all" version is recommended as well.
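As a sketch of what John describes, assuming a hypothetical category paginated at /category?page=N, the head elements are combined with ordinary anchor links in the body:

```html
<!-- In the <head> of /category?page=2 -->
<link rel="prev" href="https://www.example.com/category?page=1">
<link rel="next" href="https://www.example.com/category?page=3">

<!-- In the <body>: normal HTML links that Googlebot can crawl from page to page -->
<a href="/category?page=1">Previous page</a>
<a href="/category?page=3">Next page</a>
<a href="/category?view=all">View all</a>
```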


 

Does Google support nested sitemap index files where one index sitemap file references another one?

29:34

"No, we don't support that. I don't believe anyone has supported that. I believe that is also specifically called out in the sitemap spec as something that is not supported. You generally need to set up separate sitemap index files and submit them separately if you need to go a lot further."

 


Summary: No
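For reference, a valid sitemap index references regular sitemap files only, never another index file. A minimal sketch with hypothetical file names:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Each <loc> must be a regular sitemap, not another sitemap index -->
  <sitemap>
    <loc>https://www.example.com/sitemap-products-1.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://www.example.com/sitemap-products-2.xml</loc>
  </sitemap>
</sitemapindex>
```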


 

Should paginated pages be indexed?

30:35

This was an interesting question. The questioner was asking about canonicals on paginated pages. They asked whether the last page should have a self-referencing canonical, and the answer was yes. Then they asked whether that page should have a noindex tag on it.

"That's a bit harder. That's essentially a question of what page do you want to have indexed in the search results. And how are the individual products on your site linked among each other. That's something where some sites decide to say we want to have the paginated pages indexed, and that can be perfectly fine. If you have good content on those pages and we can pick those pages up and show them in the search results separately, that's really useful to have. If you start scrolling down through the category pages on page 5, 6, 7, 8 then you start to have repetitive content across the different categories. You don't get a lot of really useful pages out of those paginated pages and in those cases, it might make sense to just noindex everything after a certain page. That way you can focus more on the actual pages that you do want to have indexed. So, there's no really hard rule there. It's more really a matter of making sure that the pages that you provide for indexing are pages that you want to have indexed and that you want your site to be found for."


Summary: Ask whether users would land on this page from search. Does it have good content, or just content that is already on other pages? If users wouldn't want to land on this page, then noindex it.
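As a hedged sketch of the setup discussed (the URL pattern is hypothetical): each paginated page keeps a self-referencing canonical, and deep pages the site doesn't want indexed also carry a noindex robots meta tag:

```html
<!-- In the <head> of /category?page=8 -->
<link rel="canonical" href="https://www.example.com/category?page=8">
<!-- Added only on the deep pages you don't want in the index -->
<meta name="robots" content="noindex">
```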


Should older, outdated content be noindexed?

31:57

Another good question!

"In my industry we have annual enrollment periods. We have a top page that discusses annual enrollment period. I also publish new content every year targeting a year in the query. And we’ve done that in 2018-2019. Should I canonicalize these pages at some point? If so, what should be the top-level page or the most recent page?"

"I imagine that's always kind of tricky because on the one hand you might want to have the older content indexed as well, so if people are explicitly looking for, for example: what were the requirements back in 2004? And you have content from 2004, then that would be useful to have. On the other hand, if nobody is actively looking for this older content then at some point you might as well cut that off and say okay, and just point everything towards your general page. Another idea here is to say that you have one page which is just for the current version and you move all the older ones kind of to an archive version.

So, you have in this case an enrollment page for the current year which is just on a generic URL like whatever your website is /enrollment. And for the previous years, you just kind of move that over to an archived version where you have for example: enrollment/2017. That’s generally a pretty useful strategy because that way you build up that generic enrollment page. Over the years you’ll collect more links to that page, people will see that page as being quite relevant because it’s regularly updated and always has the current version on there. We’ll still be able to find the older version if people are exclusively looking for that. But we’ll always be able to pick up the current version rather quickly.

You can use this strategy any time you have something that's regularly repeating. That could be an event such as a conference, or if you have regularly updated products. And you could have separate archive pages for all these products/events, all of the older versions. That way the generic product page will grow in value over time. And the individual versions for the older ones, they would be around if people are explicitly looking for that, and we can show that in search. But they wouldn't get in the way of the most recent version showing up fairly well. So, that's a general strategy that you can use across different types of products and websites where you have this periodic update of the content and always want to make sure that the current version is as visible as possible."


Summary: The best solution here is to have one page for the current version (such as /enrollment) and then move old content to pages like /enrollment/2017. That way, the old content is still available for people who search for it, and the current content lives on an established page that has collected links over the years.
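As a hedged illustration of the URL structure (the paths are hypothetical): keep the current content at the generic URL and link the archived years from it, so both stay crawlable and findable:

```html
<!-- https://www.example.com/enrollment : always the current year's content -->
<h1>Annual Enrollment</h1>
<p>Details for the current enrollment period...</p>

<!-- Archive links so older versions remain discoverable for people who search for them -->
<ul>
  <li><a href="/enrollment/2018">2018 enrollment (archived)</a></li>
  <li><a href="/enrollment/2017">2017 enrollment (archived)</a></li>
</ul>
```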


 

Is it bad to have different types of structured data (e.g. microdata and JSON-LD) on the same page?

39:06

"So, in general that's not bad for us if we find different ways of structured data that have the same information. But usually that means that you have to maintain separate versions of the same markup on your pages which sometimes means that things get broken and like one version suddenly shows this markup, and the other version shows slightly different markup. So my recommendation would be to stick to one version, if at all possible. If you're transitioning between like microdata to JSON-LD then maybe you have a period in between when you have both of the versions on your pages. As long as you can make sure that these two versions have the same content then that's generally okay. But in the long run I'd really recommend making sure that you have one version of markup on your pages, especially for the structured data markup, and that that version is the correct version."


Summary: It's not really a problem, but it makes more sense to stick to one format.
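For context, the two formats express the same schema.org vocabulary; a minimal JSON-LD snippet like the one below (the values are hypothetical) is usually easier to maintain than the equivalent inline microdata:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "offers": {
    "@type": "Offer",
    "price": "19.99",
    "priceCurrency": "USD"
  }
}
</script>
```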


 

Can low quality comments affect a page's quality assessment?

42:01

This question was not specifically about this topic. Rather, the person was asking whether we should be replying to comments on our site. But we thought that this part of John's response was important.


"Ultimately it’s your website, you’re kind of responsible for what is shown on your website and we will rank your pages based on what you provide on your website. And if you provide us comments that are low-quality that are kind of problematic, then that’s what we will use for indexing. So that’s always kind of a trade-off there with regards to how much work you spin to focus on making sure that those comments are actually good with regards to approving or blocking comments and kind of maintaining all of that. And some sites just say ‘oh I don’t have time to deal with all this I’ll just block comments completely’. Ultimately that’s kind of a strategic decision on your side and not something that would kind of play into the search side directly.”


Summary: If low quality comments are indexed, they can be seen as a sign of low quality for your website, so if you're going to allow comments, be sure to moderate them. Our note: we think the opposite holds as well; good user generated comments can help improve your site's quality!


 

 


If you like stuff like this, you'll love my newsletter!

My team and I report every week on the latest Google algorithm updates, news, and SEO tips.


Full transcript

Question 0:33 - We've been hit by a manual action back in September and we've been trying to solve this. It is a user-generated spam manual action. Basically, some people took advantage of our internal search and generated a bunch of URLs and had Google crawl them. There are thousands out there. So, we had noindexed them; we have not been blocking that path in the robots.txt file. But noindexing didn't really work, so we are now returning 410 instead, and that didn't work. Meanwhile, I submitted a reconsideration request but that didn't work. So now we've created an XML sitemap with about 3,000 URLs and have been trying to get Google to crawl them and recognize the 410s. I don't know whether we are on the correct path or how long this is going to take.

Answer 3:38 - Yeah what I mostly see there is a bunch of URLs that are essentially removed for spam reasons but these are very specific URLs. So those are probably what you're seeing in Search Console, or a sample of those that you're seeing, and essentially what that just means is those specific URLs are removed and everything else is just ranking as normal. So, from our point of view it's not that you need to take care of this but we are kind of taking care of it for you in the search results already. It wouldn't negatively affect your site to do something to kind of have those removed a little bit faster but it's not something you would urgently need to do. So, the manual action that you see there you could essentially just leave that there.

Question 4:30 – Why is the message staying there then?

 

Answer 4:41 - Because they could be indexed as well. So it's something where if you were to remove them with maybe the URL removal tool it would be the same thing as if the web spam team. We're flagging them as spam because that's kind of what they were; they're people taking advantage of the search results pages on your site. That's okay, I know that it's obnoxious, we're kind of used to that and we try to kind of surgically remove those specific pages from the search results and if we can do that then we don't need to affect the rest of the site.

Question 5:33 – Do you recommend that XML sitemaps be left up?

Answer 5:39 – I don't think you need that. I would just work to prevent new pages like that from being indexed, and if you're currently returning 410, or if you have a noindex on those pages, then those will drop out of the index over time. The manual action that you have there is something that will also expire over time, but usually that's something that expires over a longer period of time…Again, it's not something that you need to take care of because the web spam team is already kind of cutting those specific URLs out and the rest of your site wouldn't be affected.

Question 6:32 – We have a JavaScript drop-down that leads to product pages. I submit the URL but it seems that Google can't crawl the drop-down. I'm not sure which tool I should use to verify whether the crawler can see what's in the drop-down. The drop-down is on desktop, and on mobile there isn't a drop-down.

Answer 7:27 - At the moment the site isn't being mobile first indexed, so we will use the desktop version for crawling and indexing. I could imagine, depending on what all is on the site and the mobile version, it might be that we would switch that to mobile first indexing, and then that would be something that might make it harder to crawl the rest of the website. But for evaluating when a website is ready for mobile first indexing we also take into account the links on the site, especially the internal links. So if, for example, those internal links are missing completely on the mobile version, or you just have like a text field and type the query rather than using a drop-down, then that's something where we probably wouldn't switch the site over to mobile first indexing. But still, it seems like something that's probably worth double checking on your site, to make sure that the drop-down also works on mobile. And then you should be able to see that in the mobile-friendly test, where if you look at the rendered HTML it should have the drop-down and the links there too. So my guess is at the moment it's not critical, but it's definitely something I wouldn't put off for too long.

Question 9:21 – In regards to structured data: in order for rich snippets to start displaying reviews instead of votes, we switched over to review count instead of rating count, but this is causing a warning to display for aggregate ratings when the field is empty because there are no reviews. How can we solve this?

Answer 9:38 - So that's kind of normal; if you don't have any reviews on there then I wouldn't mark it up as such. So, my recommendation would be to just leave out that kind of structured data if you don't have content that you can fill out there. In general, if there's a warning then that just means for those pages we wouldn't show any rich results for that. So that's also not particularly critical, like we're already filtering those out, but we're letting you know that your markup is kind of weird here in that you're telling us the review but at the same time you're also saying actually there are no reviews here that we can show, which is why we're flagging that as a warning. So, in general I think it's fine to switch over to that markup but make sure that you're filling it out properly or just leaving it out when you don't have anything to fill out there.

Question 16:58 – Sitemap URLs say discovered not indexed and there’s no explanation why they’re excluded?

Answer 17:11 - Essentially for a lot of URLs we just don't index everything so that's kind of normal. In the past it was just that in the sitemaps information in search console we would show you submitted so many URLs and we indexed a smaller number of those URLs and that's completely normal. We generally don't index everything that we find. So, for the most part that could be completely normal and not something that you really need to worry about. Essentially, we're trying to recognize the relevant URLs on your website the ones that we would show in the search results and try to crawl and index those.

 

Question 19:54 – the URL inspection tool will show me a page that can’t be indexed because it has a meta robots tag that says to noindex. However, there is no tag in place so I don’t understand why.

Answer 20:07 – I took a quick look at this particular URL and from what I can tell in our systems, we last crawled or processed this URL a couple months ago. So, it might be that there's no meta robots tag on there now but maybe there was one in the past. In general, this is something that does take a bit of time to be reprocessed, so a couple of months is kind of normal for a URL that might not be your homepage or your primary page on a site. So, what will probably happen there is that we will recrawl and reprocess this at some point and see that there is no meta robots tag anymore and we'll index it normally. You can also encourage us to do this a little bit faster by using the URL inspection tool and using the live test and from there, I believe, submitting it to indexing. So, that's something that helps us to understand that a URL has changed and we should make sure that we have the most recent version so that we can reflect that in the search results. That's kind of the direction that I would head in.

Question 21:20 – Can you include a link to the article from Barry about URLs not being indexed?

Answer 21:26 – I’m sure someone will drop that link into the comment of the post.

Question 21:30 – I noticed that my site was moved to mobile first indexing in August. In September there were major drops in organic traffic. The content is the same on both devices. What could be the possible reason?

Answer 21:47 – So, in general moving to mobile first indexing is something that happens very quickly and if there were any issues associated with the site in the mobile version, then on the one hand we would avoid moving it to mobile first indexing and on the other hand you would see those changes pretty much immediately as soon as we had it in the mobile first indexed version. So, if you're seeing changes in September or sometime later, then those would be normal organic ranking changes as they always have. I have seen on Twitter that, I think in September or August, there were some ranking changes that people were seeing so I could imagine that these are just kind of the normal core ranking algorithm changes that we always have. It wouldn't be related to the mobile first indexing.

 

Question 22:45 – Regarding rel previous and rel next indicated in a meta tag: in this case, if paginated pages are not linked on their parent page, does it impact the crawling and indexing of the paginated pages?

Answer 22:57 – Yes, if we can understand which pages belong together with rel next and rel previous. But if there are no links on the page at all then it’s really hard for us to crawl from page to page. So, using the rel next and rel previous link elements in the head of a page is a great idea to tell us how these pages are connected. But you really need to have on page normal HTML links that go between one page to the next page and maybe to view all or something like that. So, that’s really still recommended.

Question 23:37 – I run an SEO company and we actually create structured data for clients who have e-commerce websites. And so when we create the structured data we are moving over from votes to reviews. So, instead of using the rating count we're doing the review count. Now, this has resulted in an error with the structured data that says "the aggregate rating field is recommended. Please provide a value if available". So basically if the client doesn't have any reviews then there is a warning. And we're trying to figure out how to get rid of the warning because if we don't put any aggregate rating that'll result in an error as well. It will say that the aggregate rating is necessary.

Answer 24:58 – I would say that if there are no reviews there then I just wouldn't use the review markup for those specific cases. I mean, if it's a warning then it wouldn't be critical. It's not that we would ignore all of the structured data on the site, it's just saying that you're supplying the review markup without all of the details we would need to actually show it. At that point you just might want to remove the markup if you don't have any of the content that you'd provide in the markup.
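As a hedged sketch of what this means in practice (the values are hypothetical): include the aggregateRating block only when there are real reviews to report, and omit it entirely otherwise.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "18"
  }
}
</script>
<!-- For a product with no reviews yet, drop the aggregateRating property
     rather than sending empty or zero values. -->
```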

Question 27:05 – If you create a new page on an existing website should you put it on a new domain or on a subdomain?

Answer 27:15 – I don't know! For the most part, I would not worry too much about the SEO side of things in that regard. If you really need to host it separately and put it on a separate domain then sometimes that's more of a technical question or a policy question rather than an SEO question. In general, if you set up a completely new website for content that you're providing then that means that we have to first understand that this is a relevant website and figure out how to show it in the context of the rest of the web. But, in general, my recommendation is usually if this is additional content that you're providing on an existing website I would choose to include that inside your existing website instead of separating things out into smaller websites. Really just build a strong single website that has a concentrated value for your business.

Question 28:17 – How does a normal page get a benefit while its AMP version is ranking in the search results? Does Google treat them separately?

Answer 28:24 – So, if you have a normal HTML page and you connect an AMP page to that, you have the link rel amphtml to the AMP page and the link rel canonical back to the normal web page. Then essentially what happens there is we treat that as one site. So, there's no specific ranking benefit for having this configuration. It just really means that if we were to show you an AMP page, we would know that for this specific web page we have this AMP URL that we can show and we can use the AMP cache and serve that really quickly. All of the normal AMP things that play into that. So, it's not that there's any magical ranking advantage by going to an AMP page or setting up that specific AMP configuration. But rather that there are multiple ways that you can use AMP. And one of them is to have a separate AMP URL from your kind of traditional HTML page and there's no specific ranking advantage to doing that. It's just a technical setup that you do.

Question 29:34 – Does Google support nested sitemap index files where one index sitemap file references another one.

Answer 29:42 – No, we don’t support that. I don’t believe anyone has supported that. I believe that is also specifically called out in the sitemap spec as something that is not supported. You generally need to set up separate sitemap index files and submit them separately if you need to go a lot further.

Question 30:05 – Clarification on pagination for an e-commerce site. What should the canonical of the end page in the series be. From what I’ve researched it should be self-canonical.

Answer 30:20 – Yes, that’s correct. So essentially the canonical should point to the version of the page that you want to have indexed.

Question 30:30 - What should the meta robots tag of the end page have, noindex or index?

Answer 30:35 – That’s a bit harder. That’s essentially a question of what page do you want to have indexed in the search results. And how are the individual products on your site linked among each other. That’s something where some sites decide to say we want to have the paginated pages indexed, and that can be perfectly fine. If you have good content on those pages and we can pick those pages up and show them in the search results separately, that’s really useful to have. If you start scrolling down through the category pages on page 5, 6, 7, 8 then you start to have repetitive content across the different categories. You don’t get a lot of really useful pages out of those paginated pages and in those cases, it might make sense to just no index everything after a certain page. That way you can focus more on the actual pages that you do want to have indexed. So, there’s no really hard rule there. It’s more really a matter of making sure that the pages that you provide for indexing are pages that you want to have indexed and that you want your site to be found for.

Question 31:57 – In my industry we have annual enrollment periods. We have a top page that discusses the annual enrollment period. I also publish new content every year targeting a year in the query. And we've done that in 2018-2019. Should I canonicalize these pages at some point? If so, which should it be: the top-level page or the most recent page?

Answer 32:26 – I imagine that's always kind of tricky because on the one hand you might want to have the older content indexed as well, so if people are explicitly looking for, for example: what were the requirements back in 2004? And you have content from 2004, then that would be useful to have. On the other hand, if nobody is actively looking for this older content then at some point you might as well cut that off and say okay and just point everything towards your general page. Another idea here is to say that you have one page which is just for the current version and you move all the older ones kind of to an archive version. So, you have in this case an enrollment page for the current year which is just on a generic URL like whatever your website is /enrollment. And for the previous years, you just kind of move that over to an archived version where you have for example: enrollment/2017. That's generally a pretty useful strategy because that way you build up that generic enrollment page. Over the years you'll collect more links to that page, people will see that page as being quite relevant because it's regularly updated and always has the current version on there. We'll still be able to find the older version if people are exclusively looking for that. But we'll always be able to pick up the current version rather quickly. You can use this strategy any time you have something that's regularly repeating. That could be an event such as a conference, or if you have regularly updated products. And you could have separate archive pages for all these products/events, all of the older versions. That way the generic product page will grow in value over time. And the individual versions for the older ones, they would be around if people are explicitly looking for that, and we can show that in search. But they wouldn't get in the way of the most recent version showing up fairly well. So, that's a general strategy that you can use across different types of products and websites where you have this periodic update of the content and always want to make sure that the current version is as visible as possible.

Question 34:54 – I wanted to ask your opinion on what if we use Ajax for pagination of an e-commerce site which would not have its own pagination URLs and the website relies on view all and sitemap HTML in order to help Google crawl and find all of the product pages.

Answer 35:14 - So, it’s hard to say how exactly you mean using Ajax for pagination. In general, what’s important for us, especially for e-commerce sites, is that we can find the individual product pages and that we have some idea of their context within the website. So, if all of these products are only linked from one shared sitemap HTML page, that doesn’t really give us a lot of context about these individual products. And it can make it a little bit harder for us to actually crawl all of your content. Especially if you start going from a couple hundred products to a million products. Suddenly, your sitemap HTML page is a million links to individual products which really make it hard for us to figure out where to stop looking for links on the page. Because all of the links are on one single page rather than set up in a way that you have a clear category structure or even different categories and paginated pages, kind of like sub categories. So if we can still understand the context of these pages and the link to the individual products, if you just use JavaScript for this that would be perfectly fine. On the other hand, if you use JavaScript in a way that we can’t actually crawl through and you require us to go to the sitemap HTML page, then that sounds like something that would be suboptimal. It might work for a small set of products, but it will definitely be worse for a really large set of products. So, depending on how far you want to go or how you’re planning to expand where you are now. That’d be something where I’d say it might make sense to get a clean set up rather than setting up this complicated JavaScript setup that doesn’t really work in a scalable way for a lot of site URLs. JavaScript itself is not something that would block indexing but depending on how you have the set up here, if you don’t have separate URLs for example, then we can’t really crawl that even if we could process the JavaScript on those pages.

Question 37:35 – We have a website that will be targeting many different countries, but at the moment all of the content is in English. Is it fine to use hreflang with a specific URL for each country?

Answer 37:50 – Yes, you can do that. In practice, that seems like a bit of a waste because probably what would be happening is that we index each of these different versions for individual countries and they'd all be competing with each other. Instead of having one really strong English page that would rank well globally, you start having all of these small English versions for individual countries and they're all competing with each other for these English queries. So, my recommendation there would be to figure out where you really want to target and where you need to have separate content. And to explicitly set up hreflang versions and pages but just for those versions, not for everything you can find. So, it's generally not a good strategy to just have a list of a hundred countries where the content is valid and just make a hundred versions of the same content. You just end up having content that's much more diluted and has a lot harder time being shown in the search results. So, that'd be something where you want to find a more strategic approach.
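As a hedged illustration (the URLs are hypothetical): hreflang annotations would pair only the country versions that genuinely need separate content, with a generic English page for everyone else, rather than one copy per country.

```html
<!-- In the <head> of each paired page -->
<link rel="alternate" hreflang="en-us" href="https://www.example.com/us/">
<link rel="alternate" hreflang="en-au" href="https://www.example.com/au/">
<!-- Generic English version for all other English-speaking users -->
<link rel="alternate" hreflang="en" href="https://www.example.com/">
```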

 

Question 39:06: Is it bad for Google if there's microdata/RDFa and JSON-LD on a product page and both have the same information?

 

Answer 39:18: "So, in general that's not bad for us if we find different ways of structured data that have the same information. But usually that means that you have to maintain separate versions of the same markup on your pages which sometimes means that things get broken and like one version suddenly shows this markup, and the other version shows slightly different markup. So my recommendation would be to stick to one version, if at all possible. If you're transitioning between like microdata to JSON-LD then maybe you have a period in between when you have both of the versions on your pages. As long as you can make sure that these two versions have the same content then that's generally okay. But in the long run I'd really recommend making sure that you have one version of markup on your pages, especially for the structured data markup, and that that version is the correct version."

 

Question 40:19: If you have a directory site of local US businesses should you select target users in the US in Search Console? Will that be a mistake on the off chance that you have a blog post that does well in Europe?

 

Answer 40:35:  “I think you could do that. So what would probably happen there is, so essentially the setting there is for geotargeting which means that when we can recognize that a user is explicitly looking for content in that country and we see that your site has selected geotargeting for that country then we can show your site a little bit higher in the search results for those specific users. It wouldn’t mean that it would be lower in the other countries it’s just more visible in the US because we see that you’re saying your site is focusing on the US, and we see that the user is explicitly looking for content in the US. If on the other hand your users are kind of just generically searching and they’re not explicitly searching in a way that you could infer that they want to have a local version, then you wouldn’t really see any change from selecting geo-targeting there. So, if your content is generic and global and your users just happen to be in the US mostly, then that’s something where you need to use geotargeting. On the other hand, if your users are all in one country and you can tell they’re explicitly looking for local content, then targeting is a good way to let us know that actually your content is really well-suited for those users.”

Question 42:01: Would it be fine to not reply or approve comments on a news post or how beneficial is it to not allow comments on news posts?

 

Answer 42:09: "That's totally up to you. From a search point of view, what happens in general is we see this content as being a part of your page. And if this is content that you want to be found for, then have it visible. On the other hand, if it's content that you think is not useful for your site, it's not useful for users, it's not something you want to be associated with, for example if it's just spammy link drops that people are dropping into your site's comments because they can have their script run and just drop these automatically, then that's something where it probably makes sense to block those or remove those kind of comments. So in general, if you want to be found for something then have that content on your site. If you don't want to be found for the content that people are placing on your site, then make sure that it's not on your site. Ultimately it's your website, you're kind of responsible for what is shown on your website and we will rank your pages based on what you provide on your website. And if you provide us comments that are low-quality that are kind of problematic, then that's what we will use for indexing. So that's always kind of a trade-off there with regards to how much work you spend to focus on making sure that those comments are actually good with regards to approving or blocking comments and kind of maintaining all of that. And some sites just say 'oh I don't have time to deal with all this I'll just block comments completely'. Ultimately that's kind of a strategic decision on your side and not something that would kind of play into the search side directly."

 

Question 44:00: What will be the best way to solve search results from an Australian site coming into the American search results? Currently we have international targeting set for the appropriate country as well as working hreflang to both sites. Is there anything we can do? Each one of our dealers maintains their own website and targets their own approved territories.

Answer 44:43: "Not really sure how you mean that. I think in general what can be tricky is if you have separate dealer websites that are essentially run completely independently for individual countries, and ideally you'd want to have these kind of linked together with hreflang to say, well, someone is searching for this product in this country, that's the right one. And that one points to the other country versions, but that's sometimes hard to get individual websites to actually do. And sometimes these individual websites are set up in a way that you can't really map one to one on a URL basis between the different sites. And in cases like that, it's really hard to say that this content on this website that's not marked up with hreflang explicitly is equivalent to the content on another website and we should be picking out which one to show in the search results. So that can be kind of tricky sometimes. A few things also worth saying: we generally crawl from the US, so if you're doing anything fancy with geotargeting on the website itself, like redirecting users to their local country version or showing a banner for local country versions, then Googlebot, when crawling from the US, might trigger that. For example, if you have an Australian site and it automatically redirects US users to the US version, then Googlebot will be redirected as well, which means we would have trouble indexing the Australian version. The other thing that I generally recommend doing there is, instead of redirecting users that are coming from the wrong country, set it up so that there's a subtle banner on top that's pointing users to the right version. So if you can recognize that a US user is going to the Australian site, then have a banner on top saying 'okay, there's a US version of this content, click here to go there directly'. That way we can pick that up and follow that link as well, and users when they accidentally get to the wrong version, which is always a possibility, they can still find their way to the right version."

 

Question 46:59: Can I cut in with a quick question? I posted earlier in the chat, it’s a screenshot related with Search Console data just for an opinion on it. So basically average position is moving weekly down over the weekend. It’s like similar with traffic which is really str-, I mean not strange but you don’t see everyday average position moving like in the cycle like that and it starts overlap a lot with CTR which is also strange that CTR is also dropping a lot only over weekend and so is the average position. Is it possible that only CTR is driving an average position drop that much over the weekend or what would be an explanation for that?

 

Answer 47:48: "My guess is that the queries changed slightly. So I wouldn't expect to see like a ranking drop over the weekend. I mean I don't know for sure but as far as I know we don't have anything in our algorithms that are like 'oh this website is of this type therefore over the weekend we'll rank it completely differently'. That I don't think we would have anything like that. But what I do see a lot is that there are different user patterns and that people search in different ways and some sites get a lot of traffic during the week, some sites get a lot of traffic during the kind of off hours and during the weekends. And those patterns can be quite visible and they affect which queries usually people are using and it might be that maybe on the weekends people are searching for something where your site isn't ranking that well. Therefore the average position is down whereas during the week they're searching for something where your site is ranking well therefore the average position is higher."

 

Reply 48:56: It's true, but it's the same type of URLs, and I'm curious whether just the user intent is changing over the weekend, so they have a different behaviour as far as browsing, like the user experience within the site, and again CTR is really impacted over the weekend. But the queries being searched are sort of more or less the same.

Answer 49:12: "I would still suspect that it's more of like what users are actually doing. I really don't think we would have any algorithms that would try to figure out like is this a week, weekday, or weekend site and treat it, you know-"

Interruption 49:29: But is it possible that CTR is driving that? So CTR is, because the CTR is lower, it's driving the rankings-

 

Answer 49:35: "I'd suspect it's more a matter of people just searching differently. Because if they're really searching in the same way then the clickthrough rate would be kind of the same so that's something where I suspect if you drill down and look at the types of queries that are happening, you'd also see the number of impressions going down, you might see some queries going up more on the weekend, others going down more on the weekend. So that's something where at least from the sites that I've seen over time that there is a very visible effect there with regards to weekday and weekend and some are just more visible on the weekend and some are just a lot less visible on the weekends."

 

Reply 50:17: So CTR, although it's a ranking factor, is not powerful enough to really drive the query that much?

 

Answer 50:21: “I don’t know if we would call CTR a ranking factor, so from-. I really just assumed this is completely normal ranking changing. I mean not ranking changes but essentially based on what people are searching for.”

 

Question 51:00: Well, talking about performance, what is the most important metric that you're taking into account from Google's perspective? I mean, is it the first paint, is it the page weight, or is it a mix of everything?

 

Answer 51:12: “The last, yeah kind of a mix of everything. So we try to take into account various lab measurements which we can kind of algorithmically determine. And we take into account various what we call the field data points that we have which are also visible in PageSpeed Insights to see like what is happening in practice. So it’s kind of a mix there, we don’t point at any specific metric that we say, ‘this is exactly what we use for ranking’ or partially because we need to be able to adjust that over time.”

 

Question 52:02: Is there any faster way to push content to Google than the Google News sitemap? I mean, we work specifically with digital newspapers, so their most critical aspect is to push everything to Google. But is there any way to push it faster than the Google News sitemap?

Answer 52:20: "Usually the news sitemap is the fastest way, so that's something we can crawl fairly quickly because there's a limited size, and we should be able to pick that up fairly quickly after you've pinged the sitemap URL."