Another great hangout with John Mueller. John was asked some very interesting questions about Webmaster Tools, social media, and pagination. Here are some handpicked questions and answers that we thought were the most compelling for SEOs. The full video and transcript can be found below!

 

The Inspect URL tool tells me my page is mobile-friendly, but I received a message saying that it is not, and when I send a validation request it fails. Why?

0:49 

In this question the site owner explains that he received a message in Google Webmaster Tools saying that the content is wider than the screen. When testing the page, he found that the site was working as intended, and the Inspect URL tool showed that it is mobile-friendly. Yet when he submitted the validation request, the request failed.

John Mueller December 18 Webmaster Help Hangouts “Okay. Now I don't know, how what the specific page would be and where that might be coming from. What one of the things that sometimes happens is we need to be able to fetch the CSS files for these pages as well so that we can see what they look like. And sometimes we either don't have enough time to do that for your pages, or your server is a bit slow, and we don't have enough capacity to get that very quickly. And then the test might fail. So if we can't get the CSS, then it might look like the page isn't really mobile-friendly. And usually this is something that settles down over time and maybe you'll see individual pages that fall into the state. But in general, that's something I wouldn't really worry about. If the page is really mobile-friendly, if for the most part other pages on your website are seen as mobile-friendly, then that would generally be fine. It might fluctuate a little bit. You might see that as a warning in Search Console but I wouldn't worry too much about it if you're really sure that everything else is lined up properly.”


Summary: Google sometimes needs to fetch the CSS files for a page, and sometimes there isn't enough time or the server is a bit slow, so the page can look like it is not mobile-friendly even when it is. If you've checked and everything is working as intended, don't worry too much; things should settle down over time.
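John doesn't give a specific diagnostic here, but one rough way to sanity-check the server-speed side of this is to time how long a stylesheet takes to come back. A minimal sketch (the URL is a placeholder):

# Hypothetical check: time how long your server takes to return a CSS file.
# If this regularly takes multiple seconds, Google may occasionally fail to fetch
# the CSS in time and temporarily flag the page as not mobile-friendly.
curl -o /dev/null -s -w "HTTP %{http_code} in %{time_total}s\n" \
  "https://www.example.com/assets/styles.css"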


 

How can you check fetch and render on a page that is not your own?

4:10

John Mueller December 18 Webmaster Help Hangouts "So what I would do there is use the Mobile-Friendly Test. Uh you can use that for any page. Doesn't have to be verified in your account. And there you see exactly how Googlebot would be fetching it with the mobile device. And what you additionally see is any JavaScript errors that might be coming up. So for example, if something with the rendering is not working well because of something from the JavaScript side, which might be the case if like one element of the page is consistently missing, like the footer that you mentioned, then probably you would see that as an error in the Mobile-Friendly Test."


Summary: Using the Mobile-Friendly Test you can see exactly how Googlebot fetches a page with a mobile device. It works for any URL, not just sites verified in your account, and it also surfaces any JavaScript errors that come up during rendering.


 

I see that the desktop site is being crawled by the Googlebot smartphone. Is that the usual behavior because it's mobile-first indexing?  

12:27

John Mueller December 18 Webmaster Help Hangouts "Yes, that's mobile-first indexing. We added that to inspect URL tool I believe maybe a few weeks ago to make it possible so that you can see which user agent is used for indexing of those pages

 


Summary: Google recently added the crawling user agent to the Inspect URL tool, so you can see whether Googlebot smartphone is crawling your site, which is the expected behavior once your site has been moved to mobile-first indexing (MFI).


 

Do you ever recommend using noindex, follow for a large set of paginated pages with valuable content?

13:12

John Mueller December 18 Webmaster Help Hangouts “So I think what you mean is you have a list of items that link to valuable content. Because if the paginated content itself were valuable, then you wouldn't use noindex for those pages, I guess. In general, what happens here is a bit confusing and complicated, in the sense that as a first step, what we will do is we will try to follow the links on these pages when we see these pages. But in the long term, what will happen is we'll see the noindex, and think, well, you don't want this indexed, we will drop this from our index, and we will stop following the links on this page.”


 

Summary: Google will initially follow the links on your paginated pages. But once Google sees the noindex, it understands that you don't want those pages indexed, will eventually drop them from the index, and will stop following the links on them. So make sure anything critical is also linked from indexable pages.
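For reference, the tag being discussed is the robots meta tag in the page's head; a minimal sketch:

<!-- Keeps this page out of the index while still allowing its links to be followed (at least initially). -->
<meta name="robots" content="noindex, follow">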


How can I include social media profiles in the knowledge panel when searching for a specific brand? The wiki page is set up; the logo and other details are there. But we're missing the social media profiles.

15:01

John Mueller December 18 Webmaster Help Hangouts

 

“So there is a type of structured data markup that you can put on your home page to let us know about the social media profiles. However, it's not always the case that we pick these up and we follow them. Sometimes it takes quite a while for us to double-check that things are really aligned properly, that you're linking to the right profiles, all of that. So I would recommend putting the structured data in place, but also keeping in mind that there are other factors in play here as well. So it's not always guaranteed that we'd be able to pick that up and show that right away.”


Summary: There is a type of structured data markup you can put on your home page to let Google know about your social media profiles; however, it may take a while before Google recognizes and vets the accounts, and there's no guarantee they will be shown right away.
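The markup John refers to is commonly implemented with schema.org's sameAs property on the home page; a hedged sketch with placeholder names and URLs:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Brand",
  "url": "https://www.example.com/",
  "logo": "https://www.example.com/logo.png",
  "sameAs": [
    "https://www.facebook.com/examplebrand",
    "https://twitter.com/examplebrand",
    "https://www.instagram.com/examplebrand"
  ]
}
</script>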


 

How many links can we have in one page and be safe without diluting content too much? Is there a formula to calculate link density based on a page related to a whole website?

15:52

John Mueller December 18 Webmaster Help Hangouts

“So I'm not aware of a link density formula. So I'm not quite sure which direction you're headed there. In general, I haven't run across pages where I've seen issues with regards to the number of links that we have on a page that we can follow. I believe at some point we had a guideline in the Webmaster Guidelines that you should stay below 1,000 links per page. And we removed that, because we found we could actually go much further than that. And at some point, when you're talking about multiple thousands of links on a page, you also have to think about how does this actually work for users? And usually that's more the bigger problem there than how search engines are able to deal with that. So if you're talking about something like a page that has between 50 and 100 links on it as a normal, average web page, then that's not something where I would worry about the number of links on a page at all.”


Summary: John is not aware of a link density formula; this is not something you need to worry about. A normal page with 50 to 100 links is fine, and the old guideline to stay below 1,000 links per page was removed.


 

I noticed one article, after requesting indexing in Search Console, ranked with a featured snippet at the first crawl. After 24 hours, it had gone back to page 3. What could be the reason behind that?

17:15

John Mueller December 18 Webmaster Help Hangouts

“So I guess this is both good and bad. I mean it was ranking very quickly. You had a nice featured snippet fairly quickly. But in general, what happens with a lot of our algorithms is we try to use as many signals as possible to understand where this page should be shown, where it's relevant in the search results. And if we don't have a lot of information about a page, then we have to make some assumptions. We can't easily compare a page as completely new with a lot of pages that are already existing without assuming that maybe this new page is pretty good, or maybe it's really, really good, or maybe it's not that good, because the rest of the site is also not that good. So that's something where a lot of assumptions come into play there, especially with really new content. And sometimes that works in your favor, like I guess in this case, where we thought this was actually a really good page. And after a while, we realized maybe it's not that perfect. Sometimes it works the other way around, that we start off fairly conservative and kind of low, and then, over time, we see actually we could rank this a lot better. So that's something that, from my point of view, is hard to avoid, and not really something that's easily controllable. The best thing that you can do here is really make sure that the rest of your website is really as high quality and as fantastic as possible. Because if we see that everything else on your website is really fantastic, then when we find something new, we'll say, well, probably this is pretty good, too.”


Summary: If Google doesn't have enough information about a page, especially brand-new content, it has to make assumptions. Sometimes Google assumes the new page is really good and ranks it well, then corrects downward once it learns more. It can also happen in reverse: Google starts with conservative assumptions and ranks the page low, and it slowly ranks better over time as Google sees the content is relevant. The best defense is making sure the rest of the site is as high quality as possible.


 

If we had a lot of pages that we don't want indexed on an e-commerce site, and we did a meta noindex but follow, does that mean we're exchanging the site's crawl budget in return for Google's understanding of the context within the products?

25:05

John Mueller December 18 Webmaster Help Hangouts

“To some extent that's the case, in the sense that we have to crawl and process these pages to see the noindex metatag. So if you really don't want us to crawl those pages, returning a 404 might be an option instead. But in general, for most websites we have no problems with the crawl budget. It's not that we're limited in any way. We can generally crawl as much as we need.”


Summary: For most sites there is no issue with crawl budget; Google is not limited in any way and can generally crawl as much as it needs. If you really don't want certain pages crawled at all, returning a 404 is an option instead of noindex.


 

When is it best to use meta noindex while keeping follow?

25:50

John Mueller December 18 Webmaster Help Hangouts “So in general I think the setup with the pagination is fine with a noindex and follow for maybe for the, I don't know, from pages 5 or 6 or whatever you want to take as a kind of a cut-off time. I think that's fine. In general, I would assume, though, that anytime you put a noindex on a page, we would actually noindex that page over the long run, and we would not use any of the links there. So you really need to make sure that any links that are critical for the crawling of your website are within indexable pages as well.”


Summary: It is okay to set up pagination with noindex and follow. But over time, if a page stays noindexed, Google will stop following any of the links on it. So be sure that any links that are critical for crawling also appear on indexable pages.


 

As a company established for over 20 years, what's the best approach towards links in places like forums, which now point to dead or discontinued products and pages on our website?

26:43

John Mueller December 18 Webmaster Help Hangouts

"This is kind of the natural progression of the web, in that pages come and pages go. And sometimes people link to pages that don't exist anymore. Sometimes people link to pages that never existed, where the link is maybe broken for whatever reason. That's kind of normal. What I would recommend in general, though, is every time that you're replacing something on your website, when you're moving something around within your website, where you're changing the URLs, or maybe if you have one product, then it's replaced with a new version and the old product isn't available at all anymore, then make sure that you set up a redirect, so that at least for those links that go through those pages you will guide the user to the current version of that page as well. And I think that's a good practice in general, not just for maybe cleaning up older things, but kind of in general, that when users use one URL"


Summary: It's good practice, whenever you change something on your website, such as replacing a product with a newer version or changing URLs, to set up a redirect so that old links still guide users to the current version of that page.
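John doesn't name a particular server, but on Apache, for example, a discontinued product URL can be pointed at its replacement with a single 301 rule (paths are placeholders):

# Old forum links to the retired product keep working and land on the current version.
Redirect 301 /products/old-widget /products/new-widget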


 

When checking our backlinks profile, we seem to have links from random websites. For example advertisehere.org, or theglobe. And when visiting these, they seem to be directory sites. But we've never reached out to them for any kind of collaboration. Will keeping these links affect us negatively? Should I disavow them, or leave them?

30:55

John Mueller December 18 Webmaster Help Hangouts

"We have a lot of practice with these kind of links. So I would not lose any sleep over those either. That's something where a lot of these sites are just automatically generated, and they're linking to various sites. And that's pretty much ignored on our side. So that's not something that you'd need to do anything specific for. If you're really worried about some of these; if you look at them and you think, well, this could look like one of our SEOs previously bought links here, and it could look like that we're doing this on purpose, but we're really not associated with this website at all, then I would just drop the domain in the disavow file, and then move on. Having things in a disavow file is not sign that you're doing anything wrong. It's essentially just a simple way for you to say, I just don't want to be associated with these links. And from our side, that's processed algorithmically, automatically. So it's not something where the web spam team would look at that and say, oh, they've been doing something sneaky, but more like, oh, they don't want to be associated with this. Fine. That's perfectly fine.”


Summary: Google is good at ignoring links from spammy, auto-generated sites, so usually you don't need to do anything. If you're worried a link could look intentional, drop the domain in a disavow file and move on; having entries in a disavow file is not a sign that you're doing anything wrong.
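If you do decide to disavow, the file is a plain text list uploaded through Search Console's disavow tool; a sketch with placeholder domains:

# Directory sites we never asked for and don't want to be associated with
domain:spammy-directory.example
domain:auto-generated-links.example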


 

I have a question regarding a news site. A bunch of journalists have asked me whether we should have video of the coverage above the fold, or a big image, because more and more other news sites are using video instead of an image. It makes sense, because you get more information in a video, but it affects the site speed. Does Google prefer video over image, or not?

39:27

John Mueller December 18 Webmaster Help Hangouts

“So in general for web search, I don't think we would care either way. So the normal textual search, I guess. It depends a little bit on how you want to be visible. So for video content, if we can pick that up, we can show the video thumbnail, which is sometimes something that users like. So if you're searching for something, and you see that there's a video available, and you want to consume the content in video format, then that's something that might make sense. An image is also pretty nice. So it's also something that we can show, for example, in the top stories carousel. I believe if you use the article markup with an image, that might also be an option to show it like that. But in general, for web search it's not that we would rank these pages differently. It's more a matter of how we would present them in the search results.”


Summary: Google doesn't prefer video or image for articles in terms of ranking, but depending on the markup, it changes how the content is presented in the search results.
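A hedged sketch of what John describes: article markup carrying an image (which can be used in the Top Stories carousel) plus a nested VideoObject so the video can be picked up with a thumbnail. All URLs and dates are placeholders:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "headline": "Example coverage headline",
  "image": ["https://www.example.com/coverage-lead.jpg"],
  "datePublished": "2018-12-14",
  "video": {
    "@type": "VideoObject",
    "name": "Example coverage video",
    "description": "Short description of the clip.",
    "thumbnailUrl": "https://www.example.com/coverage-thumb.jpg",
    "uploadDate": "2018-12-14",
    "contentUrl": "https://www.example.com/coverage.mp4"
  }
}
</script>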


Okay. And just a follow-up. If you would want to show the videos in the search results, should we have the videos higher up on the page, or is it the same if it's further down?  

41:00

John Mueller December 18 Webmaster Help Hangouts

“It helps us to have it such that when we look at the page, we can tell that this is more like a video landing page, rather than that the video is a random element somewhere on the page. So I'd say fairly high up. It doesn't need to be the first thing on the page. But so that when we look at that page, we really say, well, this is a video landing page, not a normal page that happens to have videos in the footer or somewhere.”


Summary: It helps Google when the video sits fairly high up on the page, so the page clearly reads as a video landing page rather than a normal page that happens to have a video somewhere.


 

What's your recommendation for e-commerce sites where a single product has multiple variations: should we have different URLs for each variation of color and size, or is it best to have a single URL that covers all the colors and sizes?

41:50

John Mueller December 18 Webmaster Help Hangouts

“I don't have one answer that takes care of all of these options. In general, what you want to balance is making pages that are focused on a specific interest from a user, versus kind of creating too many pages that dilute the value of your content. So what I usually recommend is to think about whether or not these differences are more like attributes of the main product, or if they're actually different products. So when you're talking about phones, for example, the different sizes, they might be really critical for people. People would really want to see this phone in this size, and that's something that's really important for them; whereas the color might be something that's more secondary. They say, oh, well, I really want this phone in this size, and I don't know which color. I'll just pick one. On the other hand, it might also be that maybe you have one color that is really, really trendy that everyone really wants. And then you might say, well, I have this phone in, I don't know, green and yellow polka dots, because that's really modern at the moment, and you make a separate page for that version of that phone. So that's kind of the balance that you want to do there. In general, I tend towards having fewer pages, so you have pages that have a lot more value, and kind of more concentrated value, rather than too many pages, where you're kind of diluting things too much. So if you're talking about shoes, for example, like the different sizes, you would want to see that as an attribute, so that when someone searches for that kind of shoe that you have on your website, then your page will definitely show up. And if that size is not available there, that's kind of almost secondary. But at least your page would be able to rank for that.”


Summary: In general, fewer pages with more concentrated value beat many pages that dilute things too much. Treat details like size as attributes of the main product page so that page ranks for the product itself; a truly distinct or especially in-demand variation can justify its own page.
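If you go with the single-URL approach, one common pattern (not spelled out by John here) is to keep variants as URL parameters and canonicalize them to the main product page; a sketch with placeholder URLs:

<!-- On https://www.example.com/phones/example-x?color=gold&size=64gb -->
<link rel="canonical" href="https://www.example.com/phones/example-x">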


If you like stuff like this, you'll love my newsletter!

My team and I report every week on the latest Google algorithm updates, news, and SEO tips.

Full December 14, 2018 Google Webmaster Help Hangout and Transcript

 

Question 0:49: A few days ago I saw a message on Google Webmaster Tool that one of my client’s website. They sent me an Inspect URL and told me that content is wider than the screen. So it's not mobile-friendly. So I checked the page, I found it's working properly in my mobile phone and I had checked on other mobile devices. It worked fine. Then I used Google Webmaster Tool Inspect URL option to check whether page is mobile-friendly or not. It showed me that the page is mobile-friendly. It did not find any issue. So then I submit a request to validate the issue of issue. Then today I got a message. It told me that my request has failed. There is still an issue on the page. But when I check again, the page using Inspect URL option, it shows me the page is mobile-friendly. I’m a bit confused about it.

Answer 1:51: “Okay. Now I don't know, how what the specific page would be and where that might be coming from. What one of the things that sometimes happens is we need to be able to fetch the CSS files for these pages as well so that we can see what they look like. And sometimes we either don't have enough time to do that for your pages, or your server is a bit slow, and we don't have enough capacity to get that very quickly. And then the test might fail. So if we can't get the CSS, then it might look like the page isn't really mobile-friendly. And usually this is something that settles down over time and maybe you'll see individual pages that fall into the state. But in general, that's something I wouldn't really worry about. If the page is really mobile-friendly, if for the most part other pages on your website are seen as mobile-friendly, then that would generally be fine. It might fluctuate a little bit. You might see that as a warning in Search Console but I wouldn't worry too much about it if you're really sure that everything else is lined up properly.”

Question 3:18: I talked about my website previously. So we have a JavaScript-based site and we've recently implemented Rendertron. Um it's funny because I see in the old Webmaster tools the Fetch and Render are still having problems displaying what the site looks like to Google, and also what the Googlebot is saying that customer’s seeing. But all the code is rendering fine especially in the new Search Console, it’s showing that all of it is being read. So I'm kind of a little bit confused here, whether it's reading the site properly, or whether we’re missing something.”

Answer 4:04: “Okay. And how much of the page would would be missing when when you check it with Fetch and Render?”

Question 4:10: “Pretty much just the footer as in visually rent on the Render It's showing the footer missing on pretty much all the pages and the CSS kind of putting the styling a bit odd. But yet in the new version of Search Console it's showing that the pages are all mobile-friendly. We're getting all their green ticks and that the HTML code is visible right down to the footer.”

Answer 4:34: “Okay. So what I would do there is use the Mobile-Friendly Test. Uh you can use that for any page. Doesn't have to be verified in your account. And there you see exactly how Googlebot would be fetching it with the mobile device. And what you additionally see is any JavaScript errors that might be coming up. So for example, if something with the rendering is not working well because of something from the JavaScript side, which might be the case if like one element of the page is consistently missing, like the footer that you mentioned, then probably you would see that as an error in the Mobile-Friendly Test. I believe it's a separate tab that you have to click on to get more information on the errors. But there you would see like this JavaScript file had this issue, where we ran into this problem with this part of the page. And from that you can kind of narrow down where exactly the problem is. And you can also think about is this a critical problem for me or not. So for example, if you had ads on the page, and the ads just didn't render because of some JavaScript problem, you don't really care about that. If it's something in the footer,and for the most part the content is actually there, like your address, your phone number, then maybe that's not too critical. But at least you can find out more information there in the Mobile-Friendly Test.”

Question 6:08: John, this is the site that you looked at after last week's call when we stayed on for a few minutes afterwards, where you said you were looking at the event pages, and only the calendar in the title were loading for you if you remember.  

Answer 6:24: “Okay. Yeah. In that case, what would probably also make sense is to make sure that you look at a representative sample of the pages. So not just the home page, or the main category page, but, like Rob mentioned, something like the event pages, some of the lower-level pages across the site, so that you know that the different templates and layouts that you have, that they all work with regards to rendering on Google. And if you're using Rendertron, then for the most part that should just work out. But even there it might be that there is something quirky happening that Rendertron can't render other pages properly. So that's always good to double check. What you can also do, if you're rendering the content specifically for Googlebot, you can change the user agent in your browser to Googlebot, and see what your server would be returning for that. So that's kind of a simple way to double-check what Rendertron is producing for your pages.”

Question 7:33: Okay. And is that what you suggest doing in the feature if the old version of Webmasters is discontinued. Because if you can't see the Render on the new version, how are you supposed to see, I guess that [INAUDIBLE] is working visually?  

Answer 7:50: “Yeah, I think for the most part you can do this in the old and the new version. I think the errors, especially around JavaScript, they're all in the Mobile-Friendly Test, which is kind of in between the old and new versions. But for the most part, you should be able to see that from both of those versions.”

Question 8:12: Okay. And in terms of Rendertron, do you think long-term that's a good solution for us, or should we consider switching to client-side [INAUDIBLE]?

Answer 8:24: “I don't know. I don't know. I think, especially in a case like yours, where I assume you have a lot of content that comes and goes-- so things that don't remain persistent on the website for a longer period of time-- then I would certainly look into server-side rendering, or dynamic rendering like this so that you can serve a static HTML version of these pages so that you don't have to wait for things to be rendered on Google's side. If you had a website where the content is mostly stable-- maybe a blog where every now and then you publish something new, then client-side rendering would be fine there, because if it takes a couple days longer for the content to be rendered and indexed like that, that doesn't really make a big difference. But if you really have content that comes and goes fairly quickly, then I would make sure that you serve a static HTML version.”

Question 9:25: I manage the SEO for an in-house brand. This question is about mobile site indexing. We have a separate mobile site and a desktop site. And when I see that the desktop site Google Search Console, I have a few headers which come under the category of duplicate canonical, and Google choose an alternate canonical. And when I see the desktop URL, it's choosing the m site as the canonical for that though we have mentioned the desktop as the canonical. I've also sent a snapshot in the webmaster's questions, if you see that, question number four? It has a screenshot of that. I just wanted to know if this is normal, or are we missing something on that?   

Answer 10:19: ”Let me take a quick look at the snapshot [brief pause]. Yeah. OK. So for the most part, I don't see that as a problem. So if you have a separate mobile site with separate mobile URLs and a separate desktop site, then what can happen is that we pick one or the other as the canonical for the page, and we index the page like that. Especially in your case it looks like we're using mobile-first indexing for your web pages. You can see that with the last crawled as for the page that you showed there in the inspect URL tool. So I would imagine that we would be indexing the mobile version of the page anyway, and it's possible that we would show that as a canonical of the mobile page in the Inspect URL tools. But in general, it's not something that would cause any problems either way, as long as the content is the same on the desktop and the mobile version. A really simple way to check is just to take some text from the page and search for that, and see, is this page showing up at all in search? You can double-check on a desktop and a mobile device to see that Google is able to recognize the desktop URL and the mobile URL. So we would pick one URL as a canonical, but we would swap out the URLs and show them separately depending on the device.”

Question 11:55: Got it. And this would not impact the desktop results, or the performance in any way? Is that right to say?  

Answer 12:03: “So with mobile-first indexing, we only use the mobile version of your page as the canonical, as the version for indexing. So we would, for desktop search results, we would also use the mobile version as the version for indexing. We would show the desktop URL, if we can understand that connection between the two. But we would only use a mobile version for indexing.”

Question 12:27: Got it, John. And the last question on this is in the snapshot, I see that the desktop site is being crawled by the Googlebot smartphone. Is that the usual behavior because it's mobile-first indexing?  

Answer 12:40: “Yes, that's mobile-first indexing. We added that to inspect URL tool I believe maybe a few weeks ago to make it possible so that you can see which user agent is used for indexing of those pages.”

Question 13:12: Do you ever recommend using noindex,follow for large set of paginations with valuable content?

Answer 13:21: “So I think what you mean is you have a list of items that link to valuable content. Because if the paginated content itself were valuable, then you wouldn't use noindex for those pages, I guess. In general, what happens here is a bit confusing and complicated, in the sense that as a first step, what we will do is we will try to follow the links on these pages when we see these pages. But in the long term, what will happen is we'll see the noindex, and think, well, you don't want this indexed, we will drop this from our index, and we will stop following the links on this page. So for the most part, this is a practice that lots of sites use. And we can deal with it fairly well. So it's not something that I think sites need to change or anything. It's just it's worth understanding that if you have a noindex on your page for the long term, then it's very possible that Google systems will think, well, we don't need to index this page at all. So therefore we don't need to follow the links on this page either. So if there is anything really critical on your site that you are linking within your website, make sure that it's also linked from all the pages on your website, so that we can really follow through from an index page with a link to the other pages on your website. So if you have products, for example, if you have articles, or anything on your website, make sure that there is a way to reach those products through a normal index page as well.”

Question 15:01: How can I include social media profiles and knowledge panel when searching for a specific brand? The wiki page is set up; logo and other details are there. But we're missing the social media profiles.

Answer 15:13: “So there is a type of structured data markup that you can put on your home page to let us know about the social media profiles. However, it's not always the case that we pick these up and we follow them. Sometimes it takes quite a while for us to double-check that things are really aligned properly, that you're linking to the right profiles, all of that. So I would recommend putting the structured data in place, but also keeping in mind that there are other factors in play here as well. So it's not always guaranteed that we'd be able to pick that up and show that right away.”

Question 15:52: How many links can we have in one page and be safe without diluting content too much? Is there a formula to calculate link density based on a page related to a whole website?

Answer 16:05: “So I'm not aware of a link density formula. So I'm not quite sure which direction you're headed there. In general, I haven't run across pages where I've seen issues with regards to the number of links that we have on a page that we can follow. I believe at some point we had a guideline in the Webmaster Guidelines that you should stay below 1,000 links per page. And we removed that, because we found we could actually go much further than that. And at some point, when you're talking about multiple thousands of links on a page, you also have to think about how does this actually work for users? And usually that's more the bigger problem there than how search engines are able to deal with that. So if you're talking about something like a page that has between 50 and 100 links on it as a normal, average web page, then that's not something where I would worry about the number of links on a page at all.”

Question 17:15: I noticed one article after requesting indexing in Search Console arranged for a featured snippet at the first crawl. After 24 hours, it has gone back to page 3. What could be the reason behind that?  

Answer 17:28: “So I guess this is both good and bad. I mean it was ranking very quickly. You had a nice featured snippet fairly quickly. But in general, what happens with a lot of our algorithms is we try to use as many signals as possible to understand where this page should be shown, where it's relevant in the search results. And if we don't have a lot of information about a page, then we have to make some assumptions. We can't easily compare a page as completely new with a lot of pages that are already existing without assuming that maybe this new page is pretty good, or maybe it's really, really good, or maybe it's not that good, because the rest of the site is also not that good. So that's something where a lot of assumptions come into play there, especially with really new content. And sometimes that works in your favor, like I guess in this case, where we thought this was actually a really good page. And after a while, we realized maybe it's not that perfect. Sometimes it works the other way around, that we start off fairly conservative and kind of low, and then, over time, we see actually we could rank this a lot better. So that's something that, from my point of view, is hard to avoid, and not really something that's easily controllable. The best thing that you can do here is really make sure that the rest of your website is really as high quality and as fantastic as possible. Because if we see that everything else on your website is really fantastic, then when we find something new, we'll say, well, probably this is pretty good, too. And we'll make some assumptions that kind of go in the direction of, well, maybe we should show this more visibly in search, and figure out how high we should really show it over time. So that's probably what's happening there. And that's more the direction I would head there if you're seeing this kind of a difference.”

Question 19:27: Search Console tells me everything is fine, but when you spoke with Rob last week-- oh, this is, I guess, the site that we talked about before. If we have a bit of time afterwards, I can take a look at that as well. And we can see if we can find anything more that's kind of hidden away there. I often see sites like this. They have no real content on the page, but a lot of internal links to other pages. I have made good content, yet pages like this beat me, a lot with no content on other keywords. Why do some pages rank a lot of the time with little content or no content? Is it because they have a lot of links in a page, or is it something else? Will internal pages help the main page that's linking to help rank the main page?  

Answer 20:17: “So I don't know about the specific site, where it's ranking, or what it's ranking for, or what your site is ranking for, or trying to rank for. But in general, we use over 200 factors for crawling, indexing, and ranking. So there are a lot of things in play there. And sometimes sites are really good in one part of these factors, and sometimes sites are really good in other parts of these factors. And it's not the case that all sites have to be good equally in all of these different parts, but rather we try to balance that out. And we see, well, for this query, we're seeing, well, this page is really good in this regard, but really kind of mediocre or bad in that regard. So maybe we'll take a medium value here, or something like that. So it's really kind of the case that you don't have to do everything the same as your competitors are doing. You can be really fantastic in ways that are unique to your own site, and that still give you kind of an ability to show up in the rankings as well. So if you're seeing your competitors doing something that's kind of bad, or that's awkward, or that looks like it's not really that smart, then take that as a sign that you can actually do better. And maybe there are things that you can do on your site that help you to kind of jump over these competitors while they do things where they're essentially ranking, despite doing these weird things as well.”

Question 21:55: How would one know their website's crawl budget? Is it the crawl rate in the download graph in Search Console?

Answer 22:05: “So that's, I guess that's a tricky one, because we don't have that information at all externally. So you don't really know what your crawl budget is. And it's also something that changes quite a bit over time. Our algorithms are very dynamic, and they try to react fairly quickly to changes that you make on your website. So for example, if you install a new CMS, and you don't set it up properly, there is no caching, and it's really slow, then Googlebot, when it crawls those pages, will see, oh, this is really slow, and fairly quickly, over the next couple of days, usually Googlebot will slow down crawling and say, well, we have to be careful here that we don't overload the server. And similarly, if you improve your website-- if you move to a CDN where you can serve everything very quickly for example, then Googlebot will notice that. And when Googlebot thinks that it would like to crawl a lot more pages from your website, it will recognize, oh, crawling faster on this website now is actually a lot easier. So we can crawl more. So that's something that's very dynamic. It's not something that is assigned one time to a website. And the other aspect here is also it's based a lot on the actual demand from our side. So we try to think about how much crawling actually makes sense for this website. And that kind of changes how much we would like to crawl from a website anyway. So it's not that there's this one number that we have for your website, and it never changes, you can go that far but not further. It's really quite dynamic. One thing I would recommend doing here, though, is looking at the crawl rate statistics in Search Console, and just kind of comparing them to other websites that you have in your account, and seeing is this kind of a reasonable number here or not-- specifically the average time it takes to download a page, I believe. I'm not sure what it's called exactly in Search Console. But the average time per page, which is not the time it takes to render the page, but rather to download an HTML file from your server, or a CSS file, or whatever files that you have on your server on average. And that's something where probably you should try to aim for something reasonable so that we can fetch these pages fairly quickly. And if that time is pretty high-- say we are taking multiple seconds to fetch individual files from your server-- then that's generally a sign that you could speed up your server, and we would probably be able to crawl a little bit more. The other important part here, though, is more crawling does not mean better ranking. So just because we can crawl more doesn't mean Google will rank it better.”

Question continued 25:05: If we had a lot of pages that we don't want indexed on an e-commerce site, and we did a meta noindex but follow, does that mean we're exchanging the site's crawl budget in return for Google's understanding of the context within the products?

Answer 25:23: “To some extent that's the case, in the sense that we have to crawl and process these pages to see the noindex metatag. So if you really don't want us to crawl those pages, returning a 404 might be an option instead. But in general, for most websites we have no problems with the crawl budget. It's not that we're limited in any way. We can generally crawl as much as we need.”

Question 25:50: When would be the best to use the meta noindex and to keep follow?

Answer 25:57: “So in general I think the setup with the pagination is fine with a noindex and follow for maybe for the, I don't know, from pages 5 or 6 or whatever you want to take as a kind of a cut-off time. I think that's fine. In general, I would assume, though, that anytime you put a noindex on a page, we would actually noindex that page over the long run, and we would not use any of the links there. So you really need to make sure that any links that are critical for the crawling of your website are within indexable pages as well.”

Question 26:43: Think we looked at this with separate mobile pages. As a company established for over 20 years, what's the best approach towards links in places like forums, which now point to dead or discontinued products and pages on our website? When reviewing our backlink profile, I found links that are now dead that users shared over eight years ago. Initially, I would think to redirect. But then again, isn't the natural progression and growth of the web that pages come and go over time, and they should be dead? Should webmasters do something about this? Will there be any negative impact if we don't?

Answer 27:23: “So you're right. This is kind of the natural progression of the web, in that pages come and pages go. And sometimes people link to pages that don't exist anymore. Sometimes people link to pages that never existed, where the link is maybe broken for whatever reason. That's kind of normal. What I would recommend in general, though, is every time that you're replacing something on your website, when you're moving something around within your website, where you're changing the URLs, or maybe if you have one product, then it's replaced with a new version and the old product isn't available at all anymore, then make sure that you set up a redirect, so that at least for those links that go through those pages you will guide the user to the current version of that page as well. And I think that's a good practice in general, not just for maybe cleaning up older things, but kind of in general, that when users use one URL from your website that they're able to get content from your website. It's a bit trickier when you're looking at e-commerce sites and products that maybe used to exist but don't exist anymore. It's not always that you have a replacement available that is one-to-one kind of a copy of the old thing. And in those cases, a 404 pages is also perfectly fine. In general, when you're talking about an older web site, and looking at links-- especially from places like forums-- it's also worth keeping in mind that you kind of have to be, I guess, reasonable about the amount of effort that you put into this, in the sense that for the most part we probably have a lot of really useful and relevant links to your pages already, especially to your home page. That's something that's probably collected over time. And you will always find these kind of leftover, bad, broken links around the web. And you don't have to clean all of those up. If a website has 404 pages, and we find links to pages that return 404, that's completely normal. That's not something that you need to change, or that you need to fix in any way. Sometimes there are small things that you can do to fix these issues. For example, if you see that people always link to the uppercase version of your URLs because that used to work, and now it doesn't work anymore, then maybe setting up a redirect for upper/lower-case differences, that might be a small win that you can kind of do fairly quickly. On the other hand, if you find hundreds of links to old, discontinued products, or to an old version of your website that doesn't exist anymore, and you can't really map that one-to-one to anything new on your website, then I don't know. Those, at some point, you just have to say, well, it's not worth the effort to try to clean that up. And usually that's also something that users realize. If you go to a forum thread that's 10 years old, and you click on some of those links, and they go to 404 pages, it happens. And if it's something that's really relevant for users anyway, then someone will go into a forum and post a new link as well that kind of points to the new version. So I wouldn't lose that much sleep over it.”

Question 30:55: When checking our backlinks profile, we seem to have links from random websites. For example advertisehere.org, or theglobe. And when visiting these, they seem to be directory sites. But we've never reached out to them for any kind of collaboration. Will keeping these links affect us negatively? Should I disavow them, or leave them?

Answer 31:15: We have a lot of practice with these kind of links. So I would not lose any sleep over those either. That's something where a lot of these sites are just automatically generated, and they're linking to various sites. And that's pretty much ignored on our side. So that's not something that you'd need to do anything specific for. If you're really worried about some of these; if you look at them and you think, well, this could look like one of our SEOs previously bought links here, and it could look like that we're doing this on purpose, but we're really not associated with this website at all, then I would just drop the domain in the disavow file, and then move on. Having things in a disavow file is not sign that you're doing anything wrong. It's essentially just a simple way for you to say, I just don't want to be associated with these links. And from our side, that's processed algorithmically, automatically. So it's not something where the web spam team would look at that and say, oh, they've been doing something sneaky, but more like, oh, they don't want to be associated with this. Fine. That's perfectly fine.”

Question 32:29: A question related to SEO and WordPress. My WordPress setup has schema and OG settings that have a last updated date. This last updated date gets updated anytime the Update button is clicked in WordPress. However, my content is updated via PHP, not WordPress. So if my content changes, the last updated date remains unchanged. Is this a problem for SEO, or does Google see that the content was updated during the crawl and doesn't care that much?

Answer 33:04: “So I'm not really sure how you have this set up on your website, because WordPress is also using PHP. So it's kind of similar, I guess. In general, the last updated date that you have in the metatags, I believe with structured data you can also specify that. That's something that helps us to recognize when this article or this page was last updated. So for the most part I will try to either get that right, or just not provide that at all. So if you have that in your template, in your WordPress template, and you're not using that to give us information about the actual page's content, then I'd recommend maybe taking that out of the template so that we don't have these kind of conflicting signals. If you are using PHP to update your content, then I would think about ways that you can also maybe provide the last update date with structured data directly within the content that you're generating with PHP as well. That makes it a little bit easier for us there as well. The last modified date within the content or within the structured data helps us primarily to understand when the page was updated. So things like having a date in the snippet is a lot easier for us to actually do, then, especially if the date in the structured data matches a visible date on a page. If you have a last modified date on the page, that helps us there. The other last modified date that also kind of plays into all of the SEO side is the one in the sitemap file. So if you have an XML sitemap file, I don't know if WordPress generates those by default now. But there are some really good plugins out there that make XML sitemap files. Within a sitemap file, there's also the last modified date. And that's the date that we use to understand which pages we have to recrawl. So in particular, if you have a bigger website, and you make some changes on some of the lower-level pages, we'd have to kind of recrawl the rest of the website to realize that these pages have changed. And with the last modified date in the sitemap file, we will know fairly quickly, oh, these five pages have recently changed. We can go there directly and crawl them fairly quickly. So in the sitemap file, the last modified date is more for helping us to crawl quickly. And within the page, the modified date in structured data, for example, is something that helps us to better understand what date we should associate with the search result entry.”
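For reference, the lastmod field John mentions in the XML sitemap looks like this; a minimal sketch with a placeholder URL and date:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/updated-page/</loc>
    <lastmod>2018-12-10</lastmod>
  </url>
</urlset>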

Question 36:01: I'm working with a flank company, and there's a redirect issue flagged by Search Console. When I looked at the chain, I noticed two things. Due to their single sign-on feature, we've noticed there are more than five redirects per URL. The redirect change starts with the main URL, goes through a single sign-on chain, and back to the main URL-- kind of a redirect loop. Since single sign-on is a business need, one solution was to identify Googlebot through a server log, and disable single sign-on to remove the chain for the bot, but not users. We feel a loop which, not a redirect train. Don't understand that last part.

Answer 36:45: “In general I could imagine that this might be kind of tricky for us, in particular if we crawl one URL, and then we get redirected somewhere else, and then we get redirected back to the original URL. Then our systems might assume that that initial redirect would still apply. So that could prevent us from actually crawling those pages. The other aspect here is that a lot of times the sign-on pages or interstitials that you have use cookies to pass on the credentials. And Googlebot doesn't use cookies. So when we crawl, and we get a redirect to a different page, and a cookie is dropped for the user, Googlebot would not take up that cookie. And if we crawl again, and that cookie is needed to actually view the content, then Googlebot would not have that cookie. Users would have that cookie when they go there, but Googlebot wouldn't. So we wouldn't be able to crawl those pages at all. So depending on how you have this set up on your pages, one approach might be to look into something like flexible sampling, or what used to be called ‘first click free’, in that what you could do there is make it so that Googlebot is able to access the content directly. Mark it up with structured data so that we know that there is this kind of sign-in flow associated with that as well. And with the flexible sampling setup, you can show this kind of a sign-on interstitial as well for this kind of content. So that's kind of the direction I would head there. I wouldn't just blindly say Googlebot never has to go through this path. But I'd try to look into an approach that makes sense with regards to our guidelines, and still makes it possible for Google to get to this content. A simple way to double-check if we are able to kind of follow through this kind of redirect chain is to use a Mobile-Friendly Test. It uses a Googlebot user agent, and generally follows redirects the same way that a Googlebot would. So you should be able to see if Google is able to make it past the single sign-on redirect, or if Googlebot would get stuck in that redirect at the moment.”

Question 39:27: I have one regarding a news site. So a bunch of journalists have asked me if we should have video of the coverage that you're talking about above the fold, or a big image. Because we see that more and more other news sites are using video instead of image. And I guess it makes sense, because you get more information in a video. But then it affects the site speed. And do Google prefer video over image, or no?   

Answer 40:00: “So in general for web search, I don't think we would care either way. So the normal textual search, I guess. It depends a little bit on how you want to be visible. So for video content, if we can pick that up, we can show the video thumbnail, which is sometimes something that users like. So if you're searching for something, and you see that there's a video available, and you want to consume the content in video format, then that's something that might make sense. An image is also pretty nice. So it's also something that we can show, for example, in the top stories carousel. I believe if you use the article markup with an image, that might also be an option to show it like that. But in general, for web search it's not that we would rank these pages differently. It's more a matter of how we would present them in the search results.”

Question 41:00: Okay. And just a follow-up. If you would want to show the videos in the search results, should we have the videos higher up on the page, or is it the same if it's further down?  

Answer 41:13: “It helps us to have it such that when we look at the page, we can tell that this is more like a video landing page, rather than that the video is a random element somewhere on the page. So I'd say fairly high up. It doesn't need to be the first thing on the page. But so that when we look at that page, we really say, well, this is a video landing page, not a normal page that happens to have videos in the footer or somewhere.”

Question 41:50: Just a question on the e-commerce sites. For e-commerce sites that are multiple products, and say, for example, that a single product has multiple variations in colors and sizes-- for example iPhone X and iPhone X Gold and Silver-- is it fair to have different URLs for each violation in color, or is it best to have a single URL with all the colors and sizes in that one URL, or just canonicalize the versions? What's your recommendation on that?  

Answer 42:24: “I don't have one answer that takes care of all of these options. In general, what you want to balance is making pages that are focused on a specific interest from a user, versus kind of creating too many pages that dilute the value of your content. So what I usually recommend is to think about whether or not these differences are more like attributes of the main product, or if they're actually different products. So when you're talking about phones, for example, the different sizes, they might be really critical for people. People would really want to see this phone in this size, and that's something that's really important for them; whereas the color might be something that's more secondary. They say, oh, well, I really want this phone in this size, and I don't know which color. I'll just pick one. On the other hand, it might also be that maybe you have one color that is really, really trendy that everyone really wants. And then you might say, well, I have this phone in, I don't know, green and yellow polka dots, because that's really modern at the moment, and you make a separate page for that version of that phone. So that's kind of the balance that you want to do there. In general, I tend towards having fewer pages, so you have pages that have a lot more value, and kind of more concentrated value, rather than too many pages, where you're kind of diluting things too much. So if you're talking about shoes, for example, like the different sizes, you would want to see that as an attribute, so that when someone searches for that kind of shoe that you have on your website, then your page will definitely show up. And if that size is not available there, that's kind of almost secondary. But at least your page would be able to rank for that.”

Question 44:54 (Revisiting the site from earlier): John: Shall we take a quick look at the other site that we had before. What was that? With regards to rendering, I guess. What kind of pages were you looking at there?

Question 45:17: Well, I think originally it was that the title was loading, and that was it. But I can't seem to replicate that on my end. [INAUDIBLE] pages get [INAUDIBLE], it's more visible.   

Answer 45:44: It's really hard to hear you. [another guest puts the URL in question into the chat for John]

Answer 45:42: “Okay. All right. So were you mostly worried about the home page, or individual pages? Were you seeing any issues with regards to which ones were visible in search?”

Question 46:03: Yeah, well I mean a month ago we were pretty much not visible in search. We have come a long way since then. There is over 1,000 pages now indexed. But we have about 6,000 pages on our site. So we're getting there slowly. But I just want to make sure that the home page is getting read. And I think that we might still have some issues with page load speeds. And I guess what we can do in the short term before we move to server-side rendering to improve [INAUDIBLE].  Because if that's the way that we're heading, then we'll end up doing that eventually. But in the meantime, what can it see? Because I find it a little bit tricky right now with how it's Webmaster Tools. It's that, like I said before, the footer's not really showing, but it's also showing that the user can't see anything at all. So that's quite interesting.

Answer 47:10: “Okay. So I guess if I use a Mobile-Friendly Test for the home page, I see some content. But it feels like it's not everything there. So like the city are not loading. So to me that's kind of a sign that something is getting stuck. So if you look at the page loading information in the Mobile-Friendly Test, then it shows a bunch of URLs that seem to have some kind of redirection error. They look like URLs that are more around the images that are actually used. So I wonder if there is something maybe with the way that you're storing or providing the images that doesn't work for Google. In general, that wouldn't affect the web search side. But it would mean that we wouldn't be able to pick up these images for image search. So if there is something in those images where you're saying, well, we really need to have these images indexed, because people search in an individual way for these kind of products, or these kind of activities, then I would try to resolve that. So that's I guess on the home page. What we can do, or what I usually do is I try to click through to some of the lower-level pages, maybe take some of the category pages, which are more like the city pages, I guess, in your case, and double-check those. So let me just take one of the pages, if I can copy and paste that properly.  Let's see. So I'm looking at one of the city pages. And it takes a while to run. So let's see what happens there. And also what I guess you could also do is take one of the individual events that you have there, and do the same. So I'm basically just looking in the Mobile-Friendly Test to see what shows up there, which is kind of boring to watch, because I guess you don't see what I'm doing. Let's see. So looking at one of the city pages, I don't see any of the content actually loading there. So that feels kind of weird. It might be that it's just below the fold somewhere.”

Question 52:21: Can you see the title and the name?  

Answer 42:24: “Yeah. Yeah. I see, in this case, Melbourne Experiences, and then some text, then some background image.”

Question 50:38: I guess everything below that, it's now tiles that are experiences that we've pulled. So it seems like we're having maybe an image issue.  

Answer 50:50: “Yeah. Let me also just double-check to see if your site is being mobile-first indexed. Because then, if we're using mobile-first index for that, then that's pretty clearly a sign that we should focus on the Mobile-Friendly Test, and just use that one for diagnosing. So that seems to be the case, that you're in the mobile-first index here. Let's see. So, dum-dum-dum-dum. All of these things take so long to load. Sorry. So let's see. Looking at the internal tools I have, it does seem to be the case, at least for the city pages, we're not seeing any of the content actually load. So let me see if--”

Question 52:02: Is that in the code, or just in the screenshot?  

Answer 52:06: “I think that would be from your side. So--”

Question 52:16: Like when you switch from screenshot to HTML in the Mobile-Friendly Test?  

Answer 52:21: “Yeah. Let me look at the HTML version. But that's always a hassle to kind of figure out what's actually there without cutting and pasting it out.”

Question 52:38: Yeah, well, I think I can see a Facebook link in here. So it seems like it is reading to the bottom of the page.   

Answer 52:50: “Yeah. Okay. Maybe we can take a quick break here. I'll stop the Hangout, and then we can dig in a little bit more, because it's probably not too interesting to have random people on YouTube watch us try to figure out what's happening with your pages.”