This week John is in the Big Apple, NYC. Joined IRL, from left to right, by Martin Splitt from the JavaScript team at Google, Chris Love, Lily Ray, Marie Haynes, Dave Minchala and Quason Carter. There were some great questions this week about Cloudflare, the Quality Raters' Guidelines (QRG), and the new PageSpeed Insights tool. A link to the video and full transcript can be found below!

 

Does Cloudflare sometimes block Googlebot?

7:58

John Mueller, January 22 2019: "Yeah, I don't know how that's set up with Cloudflare at the moment, but I know in the past they used to block people that are faking Googlebot. So if you use, like, your own, I don't know, Screaming Frog or something-- you say, I'm using Googlebot user agent, then they would block that because they could tell, like, this is not a legitimate Googlebot. We can block that. But for the most part, I think they have enough practice to recognize normal Googlebots and to let that crawl normally."

 


Summary: Sometimes, when testing, it may look like Cloudflare is blocking Googlebot. What is likely happening here is that Cloudflare is blocking people who are pretending to be Googlebot. So, if you use a tool like Screaming Frog and choose Googlebot as your user agent, you may not be able to crawl sites using Cloudflare.
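
Note: if you're curious how a CDN can tell a real Googlebot from a fake one, Google documents a reverse-then-forward DNS check for verifying its crawlers. The sketch below is just a minimal illustration of that check in Python, not anything Cloudflare has confirmed using; the sample IPs are an address from Google's published crawler range and a documentation test address.

```python
import socket

def is_real_googlebot(ip_address: str) -> bool:
    """Verify a crawler IP the way Google documents: reverse DNS, then forward-confirm."""
    try:
        # Reverse DNS: genuine Googlebot IPs resolve to *.googlebot.com or *.google.com.
        hostname, _aliases, _addrs = socket.gethostbyaddr(ip_address)
        if not hostname.endswith((".googlebot.com", ".google.com")):
            return False
        # Forward DNS: the hostname must resolve back to the same IP address.
        _name, _aliases, forward_ips = socket.gethostbyname_ex(hostname)
        return ip_address in forward_ips
    except OSError:
        # No reverse DNS entry or lookup failure: treat it as not a real Googlebot.
        return False

# A crawler that only spoofs the Googlebot user agent (e.g. Screaming Frog set to
# "Googlebot") fails this check, which is why CDNs can block it.
print(is_real_googlebot("66.249.66.1"))   # an address in Google's published crawler range
print(is_real_googlebot("203.0.113.10"))  # documentation/test address, not Googlebot
```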


Can unnatural links still hurt a site algorithmically? If so, can use of the disavow tool help?

14:01

John Mueller, January 22 2019: "I think that was a good question. So from my point of view, what I would look at there is, on the one hand, definitely the case is where there is a manual action. But also, the cases where you also want to be seeing a lot of manual actions would say, well, if the web spam team looked at this now, they would give you a manual action. Kind of the cases where you'd say, well, the manual action is more a matter of time and not kind of like it's based on something that was done-- I don't know-- where it's clearly done a couple of years ago, and it was kind of borderline not. But the kind of stuff where you look at it and say, if someone from the web spam team kind of got this as a tip, they would take a manual action, and that's definitely the kind of thing where I would clean that up and do like a disavow for that. Yeah, I think it's hard to say if there is like a specific timeline. But in general, if the webspam team looked at this and said, like, things have moved on. This was clearly done a couple of years ago. It was not totally malicious. Then they probably wouldn't take manual action for that."

Follow-up question: "And I'm assuming you probably can't answer this, but is there any way that-- like, so say we didn't get a manual action, or they didn't get a manual action. Can those links hurt them algorithmically? Because we feel like we're seeing some improvements in some sites, you know, after disavowing. So, again, I know it's always-- it's never black and white."

John Mueller: "That can definitely be the case. So it's something where our algorithms when we look at it and they see, oh, they're a bunch of really bad links here, then maybe they'll be a bit more cautious with regards to the links in general for the website. So if you clean that up, then the algorithms look at it and say, oh, there's-- there's kind of-- it's OK. It's not bad."


Summary: If you have links that were specifically made just for SEO, and you have a lot of them, they can cause Google’s algorithms to distrust all of your links. Best to disavow for cases like this.
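
If you do end up disavowing, the file itself is just a plain-text list that you upload through the disavow tool in Search Console. Here is a minimal sketch of assembling one in Python; the domains and URLs are placeholders, not real examples of spam.

```python
# A minimal sketch of building a disavow file for the Search Console disavow tool.
# Lines starting with "#" are comments; "domain:" disavows an entire domain;
# bare URLs disavow individual pages. All domains/URLs below are placeholders.
spammy_domains = ["spammy-guest-posts.example", "paid-link-network.example"]
spammy_urls = ["https://old-web-directory.example/listing/123"]

lines = ["# Unnatural links built by a previous SEO agency, cleaned up January 2019"]
lines += [f"domain:{d}" for d in spammy_domains]
lines += spammy_urls

with open("disavow.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(lines) + "\n")
```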


How do Google’s algorithms measure E-A-T of publishers?

33:40

John Mueller, January 22 2019: "I don't know. I think that's probably hard to kind of figure out algorithmically. And if there were any kind of technical things that you should do, then we would let you know. So if there are things like authorship markup that we had at some points that we think would be useful for something like this, we would definitely bring that out there. But a lot of things are really more kind of soft quality factors that we try to figure out, and it's not something technical that you're either doing or not doing. It's more, like, trying to figure it out how a user might look at this. So not anything specific that I could point at."

 


Summary: There are a lot of “soft quality factors” that Google looks at, so try to see things from a user’s perspective. John also noted that if there were markup (like the old authorship markup) that would help with this, Google would let us know.


If something is in the Quality Raters’ Guidelines, is it reasonable to assume that Google wants this to be reflected in their algorithms?

34:44

John Mueller, January 22 2019: "I think, in general, it's probably good practice to aim for that. I avoid trying to focus too much on what Google might use as an algorithmic factor and look at it more as-- we think this is good for the web, and, therefore, we will try to kind of go in that direction and do these kind of things. So not so much it's like I'm making a good website just so that I can rank better, but I'm making a good website because when I do show up in search, I want people to have a good experience. And then they'll come back to my website, and maybe they'll buy something. So that's kind of the direction I would see that, not as, like, do this in order to rank, but do this in order to kind of have a healthy, long-term relationship on the web."

 


Summary: Think of what’s best for users. Even though not everything that is in the QRG is currently reflected in Google’s algorithms, these are all great steps towards improving quality overall.


Is there schema to help sites appear higher for voice search results?

36:10

John Mueller, January 22 2019: "I don't know. I can't think of anything offhand. So there is the speakable markup that you can use, which is probably reasonable to-- kind of look into to see where it could make sense on a page. I don't think we'll use that in all locations yet."

 


Summary: Speakable markup is not used in all geographic locations just yet. But, this is a good place to start.
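
If you want to experiment with it, speakable is defined at schema.org as a SpeakableSpecification pointing at the parts of a page best suited to text-to-speech. Below is a minimal sketch of the JSON-LD built in Python; the URL and CSS selectors are placeholders that would need to match your own templates.

```python
import json

# A minimal sketch of speakable structured data (schema.org SpeakableSpecification).
# The URL and CSS selectors are placeholders for your own pages and templates.
speakable_jsonld = {
    "@context": "https://schema.org",
    "@type": "WebPage",
    "name": "Example article",
    "url": "https://www.example.com/example-article",
    "speakable": {
        "@type": "SpeakableSpecification",
        "cssSelector": [".article-headline", ".article-summary"],
    },
}

# The output would be embedded in the page inside a <script type="application/ld+json"> tag.
print(json.dumps(speakable_jsonld, indent=2))
```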


Some tips on winning featured snippets

38:03

John Mueller, January 22 2019: "But featured snippets, in particular, I don't think we have any type of markup that's specific to that. So that's something where if you have clear kind of structure on the page, that helps us a lot. If we can recognize like tables on a page, then we can pull that out a lot easier. Whereas if you use fancy HTML and CSS to make it look like a table, but it's not actually a table, then that's a lot harder for us to pull out."

 


Summary: There is no markup you can add to win a featured snippet. But clear page structure, like heading tags and real HTML tables, can really help.
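
One quick way to sanity-check this is to look at whether your tabular content actually uses &lt;table&gt; markup rather than styled &lt;div&gt; grids. The rough sketch below does that check with requests and BeautifulSoup (both need to be installed); the URL and the div patterns it looks for are illustrative only.

```python
# A rough sketch: check whether a page's tabular content uses real <table> markup
# (easy for a crawler to pull out) or div grids merely styled to look like tables.
# Assumes `pip install requests beautifulsoup4`; URL and div patterns are placeholders.
import requests
from bs4 import BeautifulSoup

html = requests.get("https://www.example.com/comparison-page", timeout=10).text
soup = BeautifulSoup(html, "html.parser")

real_tables = soup.find_all("table")
pseudo_tables = soup.select('div[role="table"], div.table')  # common div-grid patterns

print(f"Real <table> elements found: {len(real_tables)}")
print(f"Div-based pseudo-tables found: {len(pseudo_tables)}")
for table in real_tables:
    headers = [th.get_text(strip=True) for th in table.find_all("th")]
    print("Table headers:", headers or "(no <th> cells)")
```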


Should location schema be added to all pages?

50:47

John Mueller January 22 2019

Answer 51:42 - As far as I know, it's just the home page. I don't know. Do any of you know?  

Answer from Liz 51:47 - It's supposed to be just one page usually with organizational and corporate. That's generally the recommendation.    

MARTIN SPLITT 52:00 - I guess it doesn't matter which-- it doesn't matter as much which pages, just like do not put it on every page that you've got, I guess, is the more important bit. I think it depends on-- if you're a news site, it probably makes sense to put it in the contact page, or the about page, or something. Whereas in a shop or restaurant website, it's probably OK to put it on the homepage.  

JOHN 52:20 - I think also, in this case, it doesn't matter for us as much because we need to be able to find it on somewhere like the home page or contact page. But if we have it elsewhere, it doesn't change anything for us. So the big thing to kind of not compare it to is the review markup where we sometimes see people put like company review on all pages of the website with the hope of getting the stars and the search results for every page on their site. And that would be bad for us. But the contact information, if you have that marked up, that's-- I don't see a problem with that.  

 


Summary: It doesn’t matter much which page it’s on, as long as Google can find the markup somewhere like the home page, contact page, or about page. Having location schema across the whole site is not a problem, but it is also not necessary.
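
For reference, here is a minimal sketch of what Organization markup with location details might look like, generated as JSON-LD in Python and meant for a single page such as the home or contact page. Every value shown is a placeholder.

```python
import json

# A minimal sketch of Organization markup with location details, intended for one
# page such as the home page or contact page. All values below are placeholders.
organization_jsonld = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Company",
    "url": "https://www.example.com/",
    "logo": "https://www.example.com/logo.png",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Example Street",
        "addressLocality": "New York",
        "addressRegion": "NY",
        "postalCode": "10001",
        "addressCountry": "US",
    },
    "contactPoint": {
        "@type": "ContactPoint",
        "telephone": "+1-555-0100",
        "contactType": "customer service",
    },
}

# Embed the output in the chosen page inside a <script type="application/ld+json"> tag.
print(json.dumps(organization_jsonld, indent=2))
```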


Why is the new PageSpeed Insights tool, based on Lighthouse, so much harsher when scoring websites?

53:05

John Mueller January 22 2019

Marie Haynes: It's not my question, but to give some context, the new Lighthouse data for page speed is way more harsh than Page Speed Insights used to be. So something that had a score of like 80 on Page Speed Insights might be a red 29 on Lighthouse. So that's a good question. Is that likely to cause-- because we know that in mobile, very slow sites could be demoted. So is it good if we say, you know, if you're in the red on the Lighthouse test, that we should really be improving things because it could cause a demotion, or is there a cutoff?  

Answer 54:07 - So we don't have a one-to-one mapping of the external tools and what we use for search. Yeah, that makes it really hard to say, but in search, we try to use a mix of actual, what is it, lab test data, which is kind of like the Lighthouse test, and the Chrome UX report data, where essentially what we're measuring is what users of the website would be seeing.  

MARTIN SPLITT 55:37 - It's also important to see that Lighthouse, for instance, specifically measures for a 3G connection on the median end-- or like a medium performance phone, yeah. So, basically, if you're on a recent Apple Macintosh or a recent fast Windows computer with a really good like wired connection or really good Wi-Fi connection in your office, of course, you're seeing a load time of two seconds, but a real user with their phone out in the wild is probably not seeing that. So it's one of these cases like it never hurts to make your website faster, but this is really, really hard to say like how it would look like from the insides perspective, as we are using very specific metrics that are not necessarily mapping one to one to what the tools are exposing... Of course it's important to fix that, because you don't want your users to wait on your website forever. That's going to hurt you. That's going to hurt your users. That's just going to hurt you in search. But I don't pay-- I would say just look at the tools. If the tools are telling you you're doing well, then you shouldn't worry too much about it. If the tools are telling you you're doing really not good, then I think the time spent on figuring out why it says-- like, if what it says is relevant, it's wasted. You should rather look at making the site faster.  

Mobile performance is a very important factor for your users, as well as everything else. So I would say make sure that your website performs well in real-world conditions. Maybe even get a cheap phone and try the website every now and then, and if-- that's something that I like to do, and I used to do before I joined Google with the development team that I was working with. I was, like, look, do you want to use this website on this phone? It was like, oh, this is horrible. I'm like, mhm, yeah, so maybe we should do something about it.  

JOHN - In Chrome, you can set it up and try different connection speeds with the mobile emulator. I think those are really good things to look at. And also, like, look at your user base. Look at your analytics data. If you're seeing that people are only using your website with a high-end iPhone, then maybe it's less of a problem than if you're seeing that people are connecting to your site from random rural connections, which are slow, and they have low-end devices; then maybe that's more of a problem.


Summary: Lighthouse measures speeds for a 3G connection, so most sites will work much more quickly than displayed here for the majority of sessions. Note: After this hangout was finished, Martin went on to say that it’s the “first contentful paint” that is most important in regards to potential ranking demotions.
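
If you want to compare the two data sources John and Martin describe, the PageSpeed Insights API exposes both the simulated Lighthouse "lab" score and the Chrome UX Report "field" data from real users. The rough sketch below queries the public v5 endpoint; the endpoint and response field names are based on that API as I recall it, so double-check them against the current documentation, and the page URL is a placeholder.

```python
import json
import urllib.parse
import urllib.request

# Rough sketch: pull both the Lighthouse "lab" score and the Chrome UX Report
# "field" data from the PageSpeed Insights v5 API. Field names are from memory
# of the public API and should be verified; the page URL is a placeholder.
page = "https://www.example.com/"
endpoint = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
query = urllib.parse.urlencode({"url": page, "strategy": "mobile"})

with urllib.request.urlopen(f"{endpoint}?{query}", timeout=120) as response:
    data = json.load(response)

# Lab data: the simulated, throttled Lighthouse run that produces the 0-100 score.
lab_score = data["lighthouseResult"]["categories"]["performance"]["score"] * 100

# Field data: what real Chrome users experienced (may be missing for low-traffic pages).
field_metrics = data.get("loadingExperience", {}).get("metrics", {})
fcp_ms = field_metrics.get("FIRST_CONTENTFUL_PAINT_MS", {}).get("percentile")

print(f"Lighthouse lab performance score: {lab_score:.0f}")
print(f"Real-user first contentful paint (ms): {fcp_ms}")
```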


If you like stuff like this, you'll love my newsletter!

My team and I report every week on the latest Google algorithm updates, news, and SEO tips.

Video and Full Transcript

Question 1:32 - I was wondering if you had any more guidance or any more point of view on testing in a way that SEOs can use to be more precise and be more confident reporting to the business. We tried this thing, and it made its impact. And we did it in a way that's surefire, sort of the way that Google would recommend.  

Answer 3:20 - So from my point of view, I try to separate out the more technical type things from kind of the quality type changes. So anything that is really a clear technical thing, you can pretty much test does it work or does it not work. It's not a matter of it kind of works or it kind of doesn't work, but a lot of these technical things, especially when you're looking at rendering, or when you're looking at can Google actually index this content, that's something that either works or it doesn't. Where it gets tricky is everything where it's about-- it's indexed, but how does it show up in rankings? And I think for a lot of that, there is no absolute way to really test that. Because if you test that in an isolated situation, like you make a test site, and you set it up with whatever recommendations you have, you can't really assume that a test site will perform the same way that a normal website will. There are sometimes simple things, like if it's a test site, then maybe we won't do full rendering because we think it doesn't make sense to spend so much time on this and this, because, like, nobody looks at it. It never shows up in the search results, why should we bother putting so much resources behind that? Whereas if you did that on a normal website, that would work very differently. So that's something where I don't really have any clear guidance on what you should do or what you shouldn't do. It seems more like the kind of thing where you look at the general trends where you see that website is showing up, kind of the changes in rankings that you're seeing for those queries, and try to apply the best practices there.  

Question 5:21 - So maybe if-- a concrete example maybe you can use that to-- maybe that would be helpful, so something like a title tag test, right? If you were doing that-- what, if anything, should we look for? Or is there anything to look at to disentangle-- is this due to our test or is this due to something changing about the server, the algo, the competitive dynamics? [LAUGHTER] Assuming that we're doing other things to look at those externalities.  

Answer 5:56 - I think a title tag change is actually pretty complex on our side in that there are kind of the interplay of does Google use a title tag that you're actually providing, on the one hand, for ranking, on the other hand, for showing in the search results. Like for desktop and mobile, we have a different amount of room, so we might show slightly different title tags. Users might respond to that in different ways. So you could be ranking the same way, but users might think, oh, this is a great page. I will show it higher-- or I will click on it in the search results because it looks like a great page. And then you have more traffic. But the ranking is actually the same. So is that like a good thing? Probably, I guess. If you're just looking at the ranking, then that would look like, well, you didn't change anything from anything, and we just got more traffic.  

But that's something where there are lots of different aspects that kind of flow in there. So that's something where I think, as an SEO, it's useful, on the one hand, to have the technical understanding of what things happened, but also to kind of have that more marketing and quality understanding of how do users react to change, what are long term changes that we could affect with users, how can you drive that to make sure that our site is seen as the most relevant site in situations where people are searching for something related to that. And I think that's something that's really hard to test. It's, like, even in traditional marketing where they have had years and years of practice, it's really hard to test, like, is this actually having a big impact or not? It's something where they end up looking at the bigger picture or doing user studies, which you can do as an SEO as well. Sorry. [LAUGHTER]  


Question 7:58 - I have a more direct question. So a number of our sites, you know, are utilizing Cloudflare, and we noticed that they directly block Googlebot, right? But it hasn't had much impact on our rankings, on our visibility, et cetera. So kind of trying to figure out how, like, are you guys using another bot to crawl and index outside of the Googlebot directly, and how should we be thinking about that when CDNs are going out of their way to block the bot?  

Answer 8:33 - Yeah, I don't know how that's set up with Cloudflare at the moment, but I know in the past they used to block people that are faking Googlebot. So if you use, like, your own, I don't know, Screaming Frog or something-- you say, I'm using Googlebot user agent, then they would block that because they could tell, like, this is not a legitimate Googlebot. We can block that. But for the most part, I think they have enough practice to recognize normal Googlebots and to let that crawl normally.


Question 9:02 - Yeah, it was interesting. Because I reached out to a bunch of colleagues at other agencies, and they were replicating similar situations even on their own site. Like, there was a support ticket within Cloudflare, and that was also being blocked when I was just trying to render directly from Googlebot or Googlebot smartphone.  

Answer 9:21 - OK, yeah. Well, we don't have any work-around. Like, if websites are blocking us, we're kind of stuck. But usually if a service like Cloudflare were blocking us by default, then that would affect a lot of websites. And we would notice that. We would probably reach out to Cloudflare about something like that. It might be that maybe they have different service tiers. So it's like if you're at the lower tier, then it's like a free service, but we have a limit with the amount of traffic. I don't know if they have something like that. That's something I've seen with some other hosters, where if you have kind of the free default hosting set up, then sometimes they just have a traffic cap, and they block things.

MARTIN SPLITT: You might not immediately see that in the stats from how you're ranking and stuff. Because basically, if we have content from you, and basically, the website is not-- depending on what blocked, in this case, means because I haven't seen that. And I run a few websites behind Cloudflare, and I have not had any problems. But then again, I don't have a gigantic website or like huge amounts of traffic because I'm on the free plan. That's-- but if you don't get like an error on our side, it-- it might just be that we are keeping the content that we've seen the last time. And that just ranks well, and that's OK.  

Answer 12:09 - Yeah, so I think what would happen in a case like yours is we would slow down the crawling. And we would try to keep the content that we can fetch in the index, and we would just take a little bit longer to recrawl. But that also means if you make changes on your website, it will take longer for us to pick that up. If we have to recrawl things significantly, like, I don't know, AMP or something like that across the whole site, then that's all going to take a lot longer. So if you really regularly see that we can't crawl with a normal Googlebot, then that's something I'd take up with the host so we can see.  
 
Question 14:01 - Great. So I have a question about the disavow tool. So we get people all the time who want us to do link audits. And ever since Penguin 4.0, so September of 2016, where Gary Illyes said, and I think you said as well, like, Google's pretty good at ignoring unnatural links. So my thought at that time was, well, we shouldn't have to use the disavow tool to ask Google to ignore links that they're already ignoring, unless you had a manual action for unnatural links. So we've been only recommending it for sites that have been actively, you know, building links, trying to manipulate things, things that are unnatural links. But I think there's so much confusion amongst webmasters because I see people all the time, you know, charging tons of money to audit-- to disavow links that are just-- I know they're being ignored. So I would love if we could have just a little bit more clarification. So maybe if I can give you an example, like, if there was a business owner who, a few years ago, hired an SEO company, and that SEO company did a bunch of guest posting just for links and, you know, stuff that was kind of medium quality, if you know what I mean, not ultra spammy, can we be confident that Google is ignoring those links? Or should we be going in and disavowing?  

Answer 15:22 - I think that was a good question. So from my point of view, what I would look at there is, on the one hand, definitely the case is where there is a manual action. But also, the cases where you also want to be seeing a lot of manual actions would say, well, if the web spam team looked at this now, they would give you a manual action. Kind of the cases where you'd say, well, the manual action is more a matter of time and not kind of like it's based on something that was done-- I don't know-- where it's clearly done a couple of years ago, and it was kind of borderline not. But the kind of stuff where you look at it and say, if someone from the web spam team kind of got this as a tip, they would take a manual action, and that's definitely the kind of thing where I would clean that up and do like a disavow for that. Yeah, I think it's hard to say if there is like a specific timeline. But in general, if the webspam team looked at this and said, like, things have moved on. This was clearly done a couple of years ago. It was not totally malicious. Then they probably wouldn't take manual action for that.  

Question 16:43 - And I'm assuming you probably can't answer this, but is there any way that-- like, so say we didn't get a manual action, or they didn't get a manual action. Can those links hurt them algorithmically? Because we feel like we're seeing some improvements in some sites, you know, after disavowing. So, again, I know it's always-- it's never black and white.  

Answer 17:03 - That can definitely be the case. So it's something where our algorithms when we look at it and they see, oh, they're a bunch of really bad links here, then maybe they'll be a bit more cautious with regards to the links in general for the website. So if you clean that up, then the algorithms look at it and say, oh, there's-- there's kind of-- it's OK. It's not bad.  

Question 17:24 - It's still good to basically disavow just to prevent a manual action, correct?  

Answer 17:29 - I think if you're in a case where it's really clear that the web spam team would give you a manual action based on the current situation, then that's what I would disavow.  

Question 17:37 - So it's good to think like at Google's-- like someone in the Google spam team just think, like, you know, if they look at this, what would they do if they do?  

Answer 17:47 - Yeah.  

Question 17:48 - The problem is, though, that most people don't know. I mean, the average business owner doesn't know which links the web spam team would-- I mean, there are guidelines, but it's very-- you know, it's hard to interpret those. So I think-- I mean, I have a couple of concerns, but my main concern is there's people spending tons of money on link audits that I think are not worth it. On the other hand, we may not be doing link audits and disavowing for some sites that could benefit from it. So I'd love to, you know, I think what you've said has helped a lot so that we'll-- you know, that's good.  

Answer 18:22 - Yeah, I think for the vast majority of sites that kind of have that normal mix of things where it's like you followed some bad advice in the past, and it's like you moved on, and things are pretty natural now, then they really don't have to do that. That's kind of the goal with all of this, and that's why the disavow tool isn't like a main feature in Search Console. You kind of look for it explicitly. That's all done on purpose because for most sites, you really don't need to focus on links that much. What I do kind of like about the disavow tool, though, is that if you're worried about this, you can still go there and be like, OK, I know there is like these handful of things that we did a couple of years ago, and I'm really worried about that. Then disavowing them, from my point of view, is not a problem. I wouldn't go out and specifically search for all of that stuff. But if you know about it, and you're really worried about it, then you can kind of take care of it. 

Question 19:27 - I have a question about one of our client websites. So they have clubs in-- they have clubs in three cities in New South Wales. And each club has a subdomain on the website. Now when they add any page to their website, they create the page for each subdomain. So recently, they have added a page, which is about their club activities, and they have added this page to their-- all subdomains. So it means that all of the subdomains have the same content, except the title of the page is different. Because when they add it for Sydney, they add their location name in the title tag. When they add it for Newcastle, they add Newcastle in the title tag, but the rest of the content on the page is the same. So will it be a problem because they have 50 subdomains, and they have created 50 pages, which have the same content except the title?  

Answer 20:36 - That sounds like something that's a bit inefficient, I guess. So I mean, you're already bringing it up and kind of like saying, this seems like something that could be done differently. I think if you're in the case where you have 50 subdomains that all have the same content, and you're just changing the title tag, then you're probably not providing us a lot of really useful content. So that's the situation where I would say, it makes more sense to combine things and make really strong pages rather than to dilute things across even more subdomains.  

Question 21:44 - What about creating one page and then using a canonical URL on the other location pages? I just want to create one page, which will talk about their activities, and I will use the link as a canonical URL on the other location pages.  

Answer 22:10 - Location-- yeah. I think that could make sense because then you're combining things again. Then you're making one strong page rather than a bunch of kind of diluted pages.  

Question 22:20 - Because what happens when someone visits the website, and they choose their location, they automatically redirect that person to that subdomain for which they indicated for their certain location. So that is the reason they need the page on that subdomain. So I think that is why if we keep the one page and add the canonical URL, that has-- it's the only option we have at this moment.  

Answer  23:08 - OK, but I think if you have separate pages that you need to have them for kind of technical reasons on the site, and you put canonical, that's a good approach.  

Question 23:21 - Would that be like-- like a business that has multiple franchises in different locations that would essentially have the same content for each franchise would be in a different city or township, or whatever, and kind of funnel that from your point of view back to a single page?  

Answer 23:34 - I think that's always a bit tricky because you're balancing people looking for that kind of business in a specific location with kind of informational page around that business directly. So that's something where sometimes it makes sense to have that separate for a business. Sometimes it makes sense to have kind of the general information in a central place and to have just like-- like location landing pages that are more focused on the address, opening hours, those kind of things.   

Question 24:12 - Yeah I have-- I have a related question as to the canonical point that you were making. This is a question that my team and I have had for a number of years. And we still don't exactly know the solution. So if you're handling a big e-commerce site with many, many products and many, many categories. Let's say you're on a category page that has a lot of different filters and facets and things of that nature that slightly change the page content, but maybe not enough to justify having its own URL. But maybe in some cases with certain filters, it does justify having a site URL. So how do you handle crawling in that situation? Like does a canonical tag work? Is it a blanket solution to just create one page that's indexed? Or should you look at, you know, no-indexing certain facets and filters, or using robots, or how do you kind of control that for large e-commerce sites?  

Answer 24:57 - That's tricky. I don't think we have really clear guidance on that at the moment. So that's something where all of those different kind of methods can make sense. In general, I try to avoid using robots.txt for that because what can happen is we find the URLs. We just don't know what is there. So unless you're really seeing problems that were causing too much load on the server, I try to use things like the noindex, use the rel canonical. Maybe you use rel nofollow with internal links to kind of funnel things to make it a little bit clearer on what we should crawl and index rather than using robots.txt. But kind of the decision on when to combine things into an index-level page and when to block it from indexing, when to kind of softly guide it towards like one canonical URL, that's-- that's really tricky sometimes.  

Question 25:53 -  Because sometimes the canonicals get ignored if the page content is too different.  

Answer 25:57 - Exactly. If the content is different, then we might say, well, these are different pages. We shouldn't use the canonical. Whereas you might say, well, this is really something I don't want to have indexed. Maybe a noindex would make more sense than a canonical. You can also combine both of them. We don't recommend combining them because it's really tricky for us to say, well, what do you mean? Are you saying these are the same, but one is indexable and one is not indexable, then they're not the same. But it's something where I imagine over the course of the year we'll have some clear guidance on what could work there. But especially with really large e-commerce sites, the crawling side can be quite challenging.  
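
To make that trade-off concrete, here is a rough sketch of the kind of rule-based logic a large e-commerce site might use to decide between indexing a facet, consolidating it with a canonical, or noindexing it, rather than blocking it in robots.txt. The facet names and rules are entirely hypothetical; the point is the decision structure, not the specific values.

```python
from urllib.parse import parse_qs, urlparse

# A rough sketch of the decision logic discussed above for faceted category URLs:
# let genuinely distinct facet pages stand on their own, noindex thin filter
# variations, and avoid robots.txt blocking (and avoid mixing noindex with a
# cross-page canonical). The facet names and thresholds are hypothetical.
INDEXABLE_FACETS = {"brand"}               # facets that justify their own indexed URL
THIN_FACETS = {"sort", "color", "price"}   # filters that only reshuffle the same items

def head_tags_for(url: str) -> str:
    parsed = urlparse(url)
    params = set(parse_qs(parsed.query))
    base_url = f"{parsed.scheme}://{parsed.netloc}{parsed.path}"

    if not params:
        return f'<link rel="canonical" href="{base_url}">'       # plain category page
    if params <= INDEXABLE_FACETS:
        return f'<link rel="canonical" href="{url}">'            # distinct enough to index
    if params & THIN_FACETS:
        return '<meta name="robots" content="noindex, follow">'  # thin variation: keep out of the index
    return f'<link rel="canonical" href="{base_url}">'           # near-duplicate: consolidate

print(head_tags_for("https://shop.example.com/shoes?sort=price&color=red"))
print(head_tags_for("https://shop.example.com/shoes?brand=acme"))
```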

Question 26:46 - So there's a scenario that I've been trying to figure out with a couple of my customers lately. We're trying to figure out why we're not able to essentially jump ahead of a site that's still using HTTP and more or less looks abandoned because the pages haven't been updated in a while, and the content is old, outdated, and generally kind of thin, and so I've got a couple of theories about it. Part of it is, I think, maybe it's been in the index so long that y'all kind of have a trust factor built up with them. And it's kind of hard to unseat them. That's part of my theory on that. So I'm just trying to figure out what's going on because I know HTTPS is a factor. I don't know how much of a factor it can be, but I also think the age might be part of the problem of trying to provide that newer, fresher content that-- in most cases, what we have done over the last year is a lot more thorough than what was written, say 10, 12 years ago. So we're trying to figure out why is it taking so long to essentially move ahead of those pages in a lot of cases.  

Answer 27:46 -  So HTTPS is a ranking factor for us. But it's really kind of a soft ranking factor.  It's really a small thing.

Question 27:55 - One of the things I've noticed about when I encounter sites that are still using HTTP is they haven't really-- they haven't been updated, in general, in two or three years, usually. So to me, it's kind of like they've almost been abandoned. To me I'm looking at it as a signal of freshness and stuff like that.  

Answer 28:10 - Yeah, I mean, freshness is always an interesting one, because it's something that we don't always use. Because sometimes it makes sense to show people content that has been established. If they're looking at kind of long-term research, then like some of this stuff just hasn't changed for 10, 20 years.

Question 28:30 - I'll give you a pragmatic example since I'm a web developer. I see pages that were written, say in 2006 or 2007. They haven't actually been changed, but the web standards, web specifications, or just the general way of handling those things has evolved. But that page is still written as if it's 2006. And yet I've got something that's fresher, you know, that's more in depth and things like that, and I'm like at number 11. They're sitting at number four, for example, like, why are they still up there, you know?  

Answer 28:59 - Yeah. It's hard to say without looking at the specific cases. But it can really be the case that sometimes we just have content that looks to us like it remains to be relevant. And sometimes this content is relevant for a longer time. I think it's tricky when things have actually moved on, and these pages just have built up so much kind of trust, and links, and all of the kind of other signals over the years, where like, well, it seems like a good reference page. But we don't realize that, actually, other pages have kind of moved on and become kind of more relevant. So I think long-term, we would probably pick that up. But it might take a while.

I don't know if we call it trust or anything crazy like that. It's more-- it feels more like we just have so many signals associated with these pages, and it's not that-- like, if they were to change, they would disappear from rankings. It's more, well, they've been around. They're not doing things clearly wrong or for as long time, and people are maybe still referring to them, still linking to them. Maybe they're kind of misled in kind of linking to them because they don't realize that, actually, the web has moved on. Or maybe, I don't know, a new PHP version came out, and the old content isn't as relevant anymore. But everyone is still linking to, I don't know, version 3 or whatever.

Question 30:42 - But I've also seen that kind of in the health and fitness space as well, you know, like workout types were more popular 10 years ago, but the particular, you know, approach to it isn't necessarily as popular now or been kind of proven to not necessarily be as good. You know, it's just some other general observations I've made too.  

Answer 31:06 - Yeah, I think it's always tricky because we do try to find a balance between kind of showing evergreen content that's been around and kind of being seen more as reference content and kind of the fresher content and especially when we can tell that people are looking for the fresher content. But we'll try to shift that as well. So it's not something that would always be the same.   

Question 32:20 - We have a large e-commerce site that's not in the mobile-first index yet. We know we serve different HTML for the same URL, depending on the user agent. Could this harm us?

Answer 32:38 - So you don't have a ranking bonus for being in the mobile-first index. So it's not that you need to be in there. But it's more a matter of when we can tell that a site is ready for the mobile-first index, then we'll try to shift it over. And at the moment, it's not at the stage where we'd say, we're like flagging sites with problems and telling them to fix things. But more where we're just trying to get up to the current status and say, OK, we've moved all of the sites over that we think are ready for mobile-first indexing. And kind of as a next step, we'll try to figure out the problems that people are still having and let them know about these issues so that they can resolve them for mobile-first indexing. So it's not that there is any kind of mobile-first indexing bonus that's out there. It's more that we're, step by step, trying to figure out what the actual good criteria should be.

Question 33:40 - Given that the search quality guidelines are an indication of where Google wants its algorithm to go, how does the current algorithm handle measuring the expertise and credibility of publishers?

Answer 33:59 -  I don't know. I think that's probably hard to kind of figure out algorithmically. And if there were any kind of technical things that you should do, then we would let you know. So if there are things like authorship markup that we had at some points that we think would be useful for something like this, we would definitely bring that out there. But a lot of things are really more kind of soft quality factors that we try to figure out, and it's not something technical that you're either doing or not doing. It's more, like, trying to figure it out how a user might look at this. So not anything specific that I could point at.  


Question 34:44 - Is it reasonable to assume that if something is in the Quality Raters’ Guidelines that Google-- I mean, that's what Ben Gomes said, right? That's where Google wants the algorithm to go. So I mean, we may be guilty of putting too much emphasis on the Quality Raters’ Guidelines, but it's all good stuff in there, right? So is it reasonable to make that assumption? Like, if it's in there, we should aim for that sort of standard of quality?  

Answer 35:09 - I think, in general, it's probably good practice to aim for that. I avoid trying to focus too much on what Google might use as an algorithmic factor and look at it more as-- we think this is good for the web, and, therefore, we will try to kind of go in that direction and do these kind of things. So not so much it's like I'm making a good website just so that I can rank better, but I'm making a good website because when I do show up in search, I want people to have a good experience. And then they'll come back to my website, and maybe they'll buy something. So that's kind of the direction I would see that, not as, like, do this in order to rank, but do this in order to kind of have a healthy, long-term relationship on the web.  

Question 36:10 - Is there a particular type of schema that is more likely to obtain featured snippets or voice search results?

Answer 36:18 - I don't know. I can't think of anything offhand. So there is the speakable markup that you can use, which is probably reasonable to-- kind of look into to see where it could make sense on a page. I don't think we'll use that in all locations yet.

Question 36:41 - Is that the goal, to use it in more locations?

Answer 36:47 -  I believe-- I guess. I mean, it's always a bit tricky because, sometimes, we try them out in one location, and we try to refine it over time. And usually, that means we roll it out in the US, and where we can kind of process the feedback fairly quickly, we can look to see how it works, how sites start implementing it or not. And based on that, we can refine things and say, OK, we're doing this in other countries, and other languages, and taking it from there. But it's not always the case that that happens. Sometimes it happens that we keep it in the US for a couple of years, and then we just said, oh, actually, this didn't pan out the way that we wanted it. So we'll try something new, or we'll give it up. Yeah. But a lot of the structured data types, we do try to roll out in other countries, other languages. I imagine the speakable markup is tricky with regards to the language. So that's something more where we'd say, well, Google Assistant isn't available in these languages. So, like, why do we care what markup is actually used there.  

I don't know how many places this system is available yet. Maybe that's everywhere now. But featured snippets, in particular, I don't think we have any type of markup that's specific to that. So that's something where if you have clear kind of structure on the page, that helps us a lot. If we can recognize like tables on a page, then we can pull that out a lot easier. Whereas if you use fancy HTML and CSS to make it look like a table, but it's not actually a table, then that's a lot harder for us to pull out.  

Question 38:37 - John, do internal links help with featured snippets if you have an anchor? Sorry, not an internal, like, an anchor like-- do you think that that would help?  

Answer 38:48 - I don't know. I do know we sometimes show those anchor links in search as a sub site link-type thing. But I don't know if that would work for featured snippets.

Question 39:04 - Do cross-domain sitemap submissions still work when 301 redirecting to an external sitemap file URL?

Answer 39:16 - Hopefully.  

Question 39:17 - What about using a meta refresh? This was something that was recommended by a video hosting company. People said, we'll host the sitemap on our site, but, you know, the XML file will meta-refresh over to our site where all the links are located.  

Answer 39:33 - I don't think that would work. So sitemap files are XML files, and we process those kind of directly.

So if you do something that's more like a JavaScript redirect or that uses JavaScript to get us the sitemap content, then that would not work. It would really need to be a server-side redirect. What you can also do is use the robots.txt file to specify a sitemap file on a different host. That also confirms to us that actually you told us specifically to use a sitemap file from there. So I'd probably use something like that more than any kind of a redirect. I imagine the 301 server-side redirect would work. But, no, I don't know, you should be able to see some of that in Search Console too, like, if we're picking the sitemap up in the index composition tool, you can pick the sitemap file, then that's a pretty clear sign that we can process that.
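
For reference, the robots.txt option John mentions looks like the sketch below: a Sitemap: line may point at a sitemap hosted on a different domain, which confirms the cross-host submission. This is just a minimal Python sketch that writes such a file; the hostnames are placeholders.

```python
# A minimal sketch of the robots.txt approach mentioned above: a "Sitemap:" line may
# reference a sitemap hosted on another domain, which confirms the cross-host
# submission for Google. The hostnames below are placeholders.
robots_txt_lines = [
    "User-agent: *",
    "Allow: /",
    "",
    "# Sitemap hosted by a third party (e.g. a video host) on our behalf:",
    "Sitemap: https://sitemaps.video-host.example/www.example.com/sitemap.xml",
]

with open("robots.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(robots_txt_lines) + "\n")
```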

Question 42:29 - It was about the travel agency websites for traveling. We choose internal search to show dynamic content, like the top 10 most cheap hotels in the search city, OK? So the page frame loads in an instant. But the top 10 cheapest hotel results load dynamically in like 30 seconds since the search has been performed because the website has to perform this search in the back and then compares and refines the results in order to list, for the searcher, the cheap top 10 hotels. So it takes a little time to list them. So right now, Googlebot sees only the background of the page. Then those 10 empty placeholders were where the results will load a little later after the internal search has been performed. So since this is a trend for travel websites to bring information as fresh as possible and as accurate as possible, I'm thinking, what Google is doing about this. Of course, we can list some static content on these pages like all other websites are doing nowadays for Google, if I may say so. But that kind of defeats the purpose of about what most users want to see now, fresh and cheap.   

Answer 44:00 - So if it doesn't load there, then we can't index it. But usually, that's more a matter of us not being able to process the JavaScript or maybe being blocked from actually accessing the content there. So it's something that you can do in a way that would work. It's not by design that it would never work. So that's something where you can dig into the details using things like the mobile friendly test to see if there are JavaScript errors involved, if things are blocked, and kind of refine it from there. So it's not impossible, but it takes a bit of work.  

Question 44:42 - John, I dug into that to make sure that nothing is blocked from Google. The only thing we want Google to do is to wait a little bit for the dynamic content to load onto the pages, you know? This is the next step, if I may say so, because while this page is not like an endless scroll, let's say Facebook, it's a limited 10 results page. So it's finite. It's limited. The thing is Google should be waiting a little bit for the dynamic content. I'm only giving you an example. But I'm sure there are a lot of other examples out there in the wild. And because since this is the trend for people to see dynamic content because they somehow sort things and the time is less and less they spend-- the people spend less and less time on the websites, and they want to find as fast as possible the perfect results, if I may say so. I was wondering if you guys are looking toward this enhancement.  

Answer 45:55 - So we do wait a bit for rendering. But if people are impatient, then that's a sign that you should be faster anyway. So that's something where I would look into that anyway. But I think looking at the screenshot, like, all of the items there were blocked in just gray boxes. So that feels like something that's more of a technical issue rather than just a time out probably.

MARTIN SPLITT: Yeah. I was about to say, like, we do see a lot of dynamic content that gets indexed without issues, even if it uses like JavaScript and stuff. So if we're timing out, then you might have an issue in terms of how long the searches take, and that might actually reflect elsewhere as well. We wait quite a while for content to finish.  

Question 46:48 - Can you-- can you give me a time frame? How much do you wait?

MARTIN SPLITT: It's really, really tricky because basically-- so the thing is, the reason why we can't give you a time frame is because time-- and this is going to sound really weird, and bear with me for a second. Time in Googlebot's rendering is special and doesn't necessarily follow Einstein's principles. [LAUGHTER] So I can't really say much. What I can say is if the network is busy and the network is the bottleneck, we're probably waiting, but we only wait for so long. So if you take like a minute or 30 seconds, then we are probably timing out in between. But there's no hard-- if I tell you 10 seconds, that might or might not work. If I tell you 30 seconds, that might or might not work. So I'd rather not say a number. What I would say, try to get it in as quickly as you can. And if you cannot get it quick, then try like something like caching the search results so that the search becomes more or less in-- or producing the results on the page becomes more and more-- more or less instant. Or try dynamic rendering on your side that might be a workaround for this. What you can also try is you can try to like put it on the server side and basically try to generate as much content as possible in the first pass. That's something that also benefits your users, especially if they are on slow networks. So yeah. Sorry I don't have any simple answer, but time in Googlebot is funky.  
 
Answer 49:29 - I think it probably depends on what the website actually does. One of the things that's tricky with the speed in rendering is we can cache a lot of stuff that's sent from a server more than it would be in a browser, because we can use our index for a lot of these things. So sometimes if JavaScript is cached on our side, we don't have to fetch it.  Then if you compare the times again, then it won't match what a user sees. It won't match what you see in webpagetest.org. So it's really kind of tricky, and for the parts where we do know it takes longer, we'll be a bit more patient. But it makes it tricky to test. That's why we have all of these testing tools now that show as many errors as possible to make it possible to figure out, like, does it not work at all? Does it sometimes work and where are sometimes the issues that come up?  
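
Martin's suggestion above to cache the search results is worth unpacking. Here is a rough sketch of that idea in Python: keep recently computed "top 10 cheapest hotels" lists around so the page can render them almost instantly instead of re-running a slow back-end search on every request. The search function is a stand-in for the real aggregation work, and the cache window is an arbitrary example value.

```python
import time
from functools import lru_cache

# Rough sketch of the caching idea mentioned above: reuse recently computed
# results so the page renders almost instantly instead of waiting ~30 seconds
# for the back-end comparison to finish. The search function is a stand-in.
CACHE_TTL_SECONDS = 300  # refresh cached results every five minutes (example value)

def _slow_hotel_search(city: str) -> tuple[str, ...]:
    time.sleep(2)  # simulates the expensive comparison/refinement step
    return tuple(f"{city} budget hotel #{i}" for i in range(1, 11))

@lru_cache(maxsize=1024)
def _cached_search(city: str, time_bucket: int) -> tuple[str, ...]:
    # time_bucket changes every CACHE_TTL_SECONDS, so cached results expire naturally.
    return _slow_hotel_search(city)

def cheapest_hotels(city: str) -> list[str]:
    bucket = int(time.time() // CACHE_TTL_SECONDS)
    return list(_cached_search(city, bucket))

print(cheapest_hotels("Sydney"))  # slow on the first request
print(cheapest_hotels("Sydney"))  # served from the cache within the TTL window
```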

Question 50:29 - For very large websites, does the order of URLs in the XML sitemaps matter?

Answer 50:34 - No. We don't care. It's an XML file. We pull in all of the data. We process it all at once.

Question 50:44 - Then what about the priority parameter in sitemaps?

Answer 50:47 - We don't use that at all. So that's something that in the beginning, we thought, oh, this might be useful to figure out how often we should crawl pages. But it turns out if you ask webmasters, they're like, everything is a priority one. It's most important. And similar also with the, I think, change frequency in sitemaps, where we also notice that, like, people tell us like things are changing all the time, but it was last updated last year. So if you have the change frequency and the date, then we would get that information from the date anyway. So we ignore the change frequency.
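
Putting that answer together with the lastmod point, a sitemap only really needs the URL and an accurate last-modified date. Below is a minimal sketch that writes such a file in Python; the URLs and dates are placeholders.

```python
from datetime import date
from xml.sax.saxutils import escape

# A minimal sitemap sketch reflecting the answers above: URL order, <priority>, and
# <changefreq> are ignored, so only <loc> and an accurate <lastmod> are included.
# The URLs and dates below are placeholders.
pages = [
    ("https://www.example.com/", date(2019, 1, 20)),
    ("https://www.example.com/products/", date(2019, 1, 15)),
]

url_entries = "\n".join(
    "  <url>\n"
    f"    <loc>{escape(url)}</loc>\n"
    f"    <lastmod>{last_modified.isoformat()}</lastmod>\n"
    "  </url>"
    for url, last_modified in pages
)

sitemap_xml = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    f"{url_entries}\n"
    "</urlset>\n"
)

with open("sitemap.xml", "w", encoding="utf-8") as f:
    f.write(sitemap_xml)
```
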
Question 51:35 - Should corporate schema be added to just the home page, contact page, or all pages?

Answer 51:42 - As far as I know, it's just the home page. I don't know. Do any of you know?  

Answer from Liz 51:47 - It's supposed to be just one page usually with organizational and corporate. That's generally the recommendation.    

MARTIN SPLITT 52:00 - I guess it doesn't matter which-- it doesn't matter as much which pages, just like do not put it on every page that you've got, I guess, is the more important bit. I think it depends on-- if you're a news site, it probably makes sense to put it in the contact page, or the about page, or something. Whereas in a shop or restaurant website, it's probably OK to put it on the homepage.  

JOHN 52:20 - I think also, in this case, it doesn't matter for us as much because we need to be able to find it on somewhere like the home page or contact page. But if we have it elsewhere, it doesn't change anything for us. So the big thing to kind of not compare it to is the review markup where we sometimes see people put like company review on all pages of the website with the hope of getting the stars and the search results for every page on their site. And that would be bad for us. But the contact information, if you have that marked up, that's-- I don't see a problem with that.  

Question 53:05 - The Google website speed test we've been using has been recording very slow page load times, but the independent tests we did with colleagues overseas displayed a very quick page load time. Does this false recording that Google measures affect how our site is ranked in the Google algorithm?

Marie Haynes: It's not my question, but to give some context, the new Lighthouse data for page speed is way more harsh than Page Speed Insights used to be. So something that had a score of like 80 on Page Speed Insights might be a red 29 on Lighthouse. So that's a good question. Is that likely to cause-- because we know that in mobile, very slow sites could be demoted. So is it good if we say, you know, if you're in the red on the Lighthouse test, that we should really be improving things because it could cause a demotion, or is there a cutoff?  

Answer 54:07 - So we don't have a one-to-one mapping of the external tools and what we use for search. Yeah, that makes it really hard to say, but in search, we try to use a mix of actual, what is it, lab test data, which is kind of like the Lighthouse test, and the Chrome UX report data, where essentially what we're measuring is what users of the website would be seeing.  

MARTIN SPLITT 55:37 - It's also important to see that Lighthouse, for instance, specifically measures for a 3G connection on the median end-- or like a medium performance phone, yeah. So, basically, if you're on a recent Apple Macintosh or a recent fast Windows computer with a really good like wired connection or really good Wi-Fi connection in your office, of course, you're seeing a load time of two seconds, but a real user with their phone out in the wild is probably not seeing that. So it's one of these cases like it never hurts to make your website faster, but this is really, really hard to say like how it would look like from the insides perspective, as we are using very specific metrics that are not necessarily mapping one to one to what the tools are exposing... Of course it's important to fix that, because you don't want your users to wait on your website forever. That's going to hurt you. That's going to hurt your users. That's just going to hurt you in search. But I don't pay-- I would say just look at the tools. If the tools are telling you you're doing well, then you shouldn't worry too much about it. If the tools are telling you you're doing really not good, then I think the time spent on figuring out why it says-- like, if what it says is relevant, it's wasted. You should rather look at making the site faster.  

Mobile performance is a very important factor for your users, as well as everything else. So I would say make sure that your website performs well in real-world conditions. Maybe even get a cheap phone and try the website every now and then, and if-- that's something that I like to do, and I used to do before I joined Google with the development team that I was working with. I was, like, look, do you want to use this website on this phone? It was like, oh, this is horrible. I'm like, mhm, yeah, so maybe we should do something about it.   

JOHN - In Chrome, you can set it up and try different connection speeds with the mobile emulator. I think those are really good things to look at. And also, like, look at your user base. Look at your analytics data. If you're seeing that people are only using your website with a high-end iPhone, then maybe it's less of a problem than if you're seeing that people are connecting to your site from random rural connections, which are slow, and they have low-end devices; then maybe that's more of a problem.