A great hangout with John Mueller on January 11, 2019. There are lots of useful tips in this one, covering changes to Google Search Console, mobile-first indexing, and TLDs!

Changes happening to Search Console

1:08

John Mueller January 11 2019"On the Search Console side, there are lots of changes happening there, of course. They've been working on the new version. I imagine some of the features in the old versions will be closed down as well over time. And some of those might be migrating to the new Search Console. I'm sure there will be some sections of Search Console, also, that will just be closed down without an immediate replacement, primarily because we've seen there's a lot of things that we have in there that aren't really that necessary for websites where there are other good options out there, or where maybe we've been showing you too much information that doesn't really help your website. So an example of that could be the crawl errors section where we list the millions of crawl errors we found on your website, when actually it makes more sense to focus on the issues that are really affecting your website rather than just all of the random URLs that we found on these things. So those are some of the changes that I think will be happening this year. And like I mentioned, some of these will be easier to move along with. Others will be a bit trickier. But in any case, there will be changes."


Summary: Some parts of the “old” Search Console will not be migrated to the current version. The crawl errors section likely will not be moved.


Interesting question about indexing for a site that is JavaScript-heavy

3:31

John Mueller January 11 2019"...if you deliver a server side rendered page to us, and you have JavaScript on that page that removes all of the content or reloads all of the content in a way that can break, then that's something that can break indexing for us. So that's one thing where I would make sure that if you deliver a server side rendered page and you still have JavaScript on there, make sure that it's built in a way that when the JavaScript breaks, it doesn't remove the content, but rather it just hasn't been able to replace the content yet. "


Summary: If you are worried that JavaScript may be challenging for Googlebot, be sure to build the page in a way that means that the main content is still seen without JavaScript.
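
To make that concrete, here is a minimal client-side sketch of the pattern John describes. The element id, endpoint, and markup are hypothetical; the point is that the server-rendered HTML already contains the content, and the script only swaps it out after a successful fetch, so a JavaScript failure leaves the original content in place.

```typescript
// Minimal sketch (hypothetical element id and endpoint): the server-rendered
// HTML already contains #product-details, and this script only replaces it
// once it has valid new markup. If anything throws, the original content stays.
async function refreshProductDetails(): Promise<void> {
  const container = document.getElementById("product-details");
  if (!container) return; // nothing to enhance

  try {
    const response = await fetch("/api/product-details"); // hypothetical endpoint
    if (!response.ok) throw new Error(`HTTP ${response.status}`);
    const html = await response.text();
    container.innerHTML = html; // replace only after a successful fetch
  } catch {
    // Deliberately do nothing: the server-rendered markup remains visible,
    // so Googlebot and users still see the content even when the JS breaks.
  }
}

refreshProductDetails();
```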


Is mobile friendliness the same thing as mobile first indexing?

10:16

John Mueller January 11 2019"The mobile friendliness evaluation is completely separate from mobile-first indexing. So that's something that's being done completely independently. The indexing side is more a technical matter of switching things over to a mobile crawler. And using the mobile version for indexing and mobile friendliness is more a matter of recognizing when a page works well on mobile devices so that we can show it a little bit higher in the search results. "


Summary:  These are two separate things.

Note: We have just put together a very thorough post to answer all of your questions on mobile first indexing.


Interesting details on how Google indexes and then renders a page

10:50

John Mueller January 11 2019"I think what you're seeing there is probably related to the things that you touched upon. On the one hand, it might be that we're looking at older versions of the page. When we index a page, obviously we index it in one stage, and then we need to be able to test the indexed version. And depending on the timelines there, it might be that it takes a little bit of time for us to connect the two sides. So there could be some delay between the indexing of the HTML page and the rendering, which means we'd ideally still be able to render that page a little bit later. I don't know if five months is reasonable there. I think that that feels a bit long. "


Summary:  Googlebot gathers data and indexes it in one stage, and then it takes time for Google to test the indexed version. If you have content that requires rendering, it may take a while for that content to rank well.


 

Possible explanation for “temporarily unreachable” in Fetch and Render

12:30

If you have seen this error message, it can be confusing. John explained a possible reason why it may show.

John Mueller January 11 2019“The other thing that I've seen here is that sometimes it's more a matter of us just temporarily not being able to fetch those resources, which is more a matter of we'd like to crawl more from a website. But we temporarily can't do that because we have so many things lined up for that website. We don't want to overload it. And that's one thing I've seen sometimes trigger these issues in Search Console. And that, in general, is not a problem because we can line those resources up and crawl them a little bit later. But our systems internally will flag it like we couldn't test to see if this is really mobile friendly or not. And then Search Console is a bit too helpful and alerts you of all of these issues right away. So that's one thing we're going to work on to make sure that those alerts are a little bit more based on the stable version that we would use for indexing."


Summary:  Sometimes you will see “temporarily unreachable” in Google Search Console because Google doesn’t want to overload your site by crawling it too much. This usually resolves over time.


Are sites that use a CDN treated differently by Google?

13:32

John Mueller January 11 2019"And that is still the case. So for us, a CDN is not anything specific that we would explicitly call out and say, this is something that we need to treat specially, but rather just a way of hosting a website. And that's something that has implications with regards to how fast your website can load, where it's available to the audience, and all of that. That's perfectly fine. "


Summary:  If a CDN causes your site to load faster, this is good. But otherwise, Google doesn’t treat sites any differently because they use a CDN.


If you change your image URLs, will this impact rankings?

16:58

John Mueller January 11 2019"That's a good question. So yes, this will affect your website in Google Images. So in the image search, in particular, if we see changes in URLs with regards to embedded images, then that's something where we will have to go off and first recrawl those images, reprocess them, reindex them, get them all ready for image search again. So if you just change the URLs that are referenced within your pages, then that will result in those images being seen as new images first. And with regards to ranking, they'll have to work their way up again. So setting up redirects, like you mentioned, that's a fantastic way to do that. Because that way, we know the old images are related to the new ones, and we can forward any signals we have from the old ones to the new images. So that's really what you should be aiming for there. This is particularly relevant if a site gets a lot of traffic from image search. And it's something where image search is a little bit different than web search in that with images, we find that images tend not to change as much, so we don't recrawl them as frequently. So if you make significant changes in the image URL structure that you're using, it's going to take a lot longer for us to reprocess that. So in particular for images, you really need to make sure that those redirects are setup. And that's something that oftentimes you don't see directly because you load the page, it refers to the new image URLs. You don't realize that actually those redirects between the old image URLs and the new image URLs is missing. So if you get a significant amount of traffic from Google Images, make sure that you have those details covered. "


Summary:  If you change image URLs, you will likely lose rankings for those images and it will take a significant amount of time for rankings to return. Setting up redirects from the old image URLs to the new ones can help.
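
As a rough illustration of those redirects, here is a hypothetical Express sketch. The paths and the renaming rule are made-up examples; the key detail is the permanent (301) redirect from each old image URL to its new location.

```typescript
// Hypothetical Express sketch: 301-redirect old image URLs to their new
// locations so signals from the old files can be forwarded to the new ones.
import express from "express";

const app = express();

// Example mapping (placeholder paths):
// /media/old-images/shoe.jpg -> /assets/img/shoe.jpg
app.get("/media/old-images/:file", (req, res) => {
  res.redirect(301, `/assets/img/${req.params.file}`); // 301 = permanent
});

app.listen(3000);
```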


 

Will changing your website's theme affect rankings?

19:13

John Mueller January 11 2019"So yes, this will result in changes in your website's visibility in Google. It's not necessarily the case that it will drop. It can also rise. So if you significantly improve your website through things like clearly marking up headings, adding structured data where it makes sense, using a clear HTML structure that makes it easier for us to pick out which content belongs together, which content belongs to the images, all that can have a really strong positive effect on your website and search. So that's something where you could see significant positive changes. And this is one of those areas where when you're working on SEO for your website, where you can make a big difference with regards to how we and other search engines would see your site. So it's not the case that your rankings will always fall when you make significant changes like this, but they will definitely change."


Summary:  Often theme changes cause changes in internal linking which can impact rankings. But you may also see positive changes if your new design is better.


 

Is it ok to serve different content for desktop vs mobile users by hiding some of the content with CSS?

20:36

John Mueller January 11 2019" So first of all, that's perfectly fine for Googlebot. We can deal with that. We can recognize when content is hidden and try to treat it slightly differently. However, it seems like something where maybe you're adding more complexity than you actually need, and where I suspect it's a bit trickier to maintain a website like that where you always duplicate the same content. So my recommendation there would be to try to find a way to use responsive web design to serve this in a way where you're not duplicating the content. That makes the pages a little bit smaller so they load faster, too. And then you don't have to worry about other search engines and how they might handle this. Again, from Googlebot's side, this is fine. It will probably result in both of these blocks being indexed for the same page. But in general, that's not something that would cause any problems. "


Summary: This is not a problem. But, it may be better to consider moving to a responsive design to reduce complexity.
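
For context, the responsive alternative John points to can be as simple as rendering the content once and letting CSS adapt it per viewport. A hypothetical React/TypeScript sketch (component name, class names, and copy are placeholders):

```typescript
import React from "react";

// Sketch: the copy exists once in the markup; a media query adapts the
// presentation per device instead of duplicating the block and hiding one copy.
const PRODUCT_COPY =
  "Lightweight trail shoe with a waterproof upper and a cushioned midsole.";

export function ProductDescription() {
  return (
    <>
      <p className="product-copy">{PRODUCT_COPY}</p>
      <style>{`
        .product-copy { font-size: 1rem; line-height: 1.5; }
        @media (min-width: 768px) {
          .product-copy { font-size: 1.125rem; max-width: 60ch; }
        }
      `}</style>
    </>
  );
}
```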


For a business with multiple locations, is it best to have one site or a different site for each location?

22:00

John Mueller January 11 2019" That's ultimately up to you with regards to multiple sites or one site. Personally, I recommend having one strong site rather than having it split up into multiple smaller sites. So maybe having one strong website for the company in general, and then individual landing pages for the individual locations so that people who are searching for those locations, they can find them and see what makes these locations a bit special. But at the same time, where you have one really strong website that is really easy to find in Google when people are searching for your company in general. So that would mean my recommendation there, you can, of course, if you prefer to have separate websites for these, that's something that's an option as well.  The thing I would watch out for here, though, is if you have separate websites is to think about how you want to deal with shared content. So in particular, if you have informational pages about the products or services that you offer-- if you're duplicating these across all of these different websites for the individual locations, then which of these pages would you like to have shown when someone is searching for something general, something about the product or service that you offer? And if you have that spread out across all of these different pages, then they're all kind of competing with each other. So it might make sense to pick one of these locations, say this is the primary location, and this is where I want to have all my general content indexed, and to have the other websites be a little bit more focused on the primary location and just listing their additional information there. So again, this is one of those things where, if you have separate websites, you're almost tending towards a situation where you are sharing a lot of content, and where you try to pick one main location anyway. So you might as well just make one really strong website and just have individual landing pages for the individual locations."


Summary: Either is fine, but it usually makes sense to just have one strong website rather than several individual ones.
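
The full question (in the transcript below) also mentions adding schema to each location page. If you go with one strong site plus per-location landing pages, a LocalBusiness JSON-LD block on each page is a common way to handle that. A hypothetical TypeScript helper with placeholder business details:

```typescript
// Sketch: build a LocalBusiness JSON-LD block for one location landing page.
// The business name, address, and URL below are placeholder values.
interface LocationInfo {
  name: string;
  streetAddress: string;
  addressLocality: string;
  addressRegion: string;
  postalCode: string;
  url: string;
}

function localBusinessJsonLd(loc: LocationInfo): string {
  const data = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    name: loc.name,
    url: loc.url,
    address: {
      "@type": "PostalAddress",
      streetAddress: loc.streetAddress,
      addressLocality: loc.addressLocality,
      addressRegion: loc.addressRegion,
      postalCode: loc.postalCode,
    },
  };
  return `<script type="application/ld+json">${JSON.stringify(data)}</script>`;
}

// Example: one landing page per location on a single strong site.
console.log(
  localBusinessJsonLd({
    name: "Example Plumbing – Austin",
    streetAddress: "123 Main St",
    addressLocality: "Austin",
    addressRegion: "TX",
    postalCode: "78701",
    url: "https://www.example.com/locations/austin",
  })
);
```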


Should you have a separate sitemap for images?

31:26

John Mueller January 11 2019" Both of these approaches work. For us, what happens on a technical level is we take all of the sitemaps that we found for your website and we combine them into one big pile, and then we process them there. So how you split that information up into separate sitemap files is mostly up to you. It is something that you can pick a way that works well for your setup, for your DNS, for your server, for your infrastructure, whatever you have. So that doesn't really matter for us. "

 


Summary: It doesn’t really matter; Google combines all of your sitemaps into one big pile and processes them together.
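
Since Google combines whatever sitemaps it finds and processes them as one pile, either layout works. Here is a rough TypeScript sketch of the single-file option, listing pages together with their images; the URLs are placeholders and the image namespace is the standard sitemap-image extension.

```typescript
// Sketch: build one sitemap that lists pages together with their embedded images.
interface PageEntry {
  loc: string;      // page URL
  images: string[]; // image URLs embedded on that page
}

function buildSitemap(entries: PageEntry[]): string {
  const urls = entries
    .map((entry) => {
      const imageTags = entry.images
        .map((img) => `    <image:image><image:loc>${img}</image:loc></image:image>`)
        .join("\n");
      return `  <url>\n    <loc>${entry.loc}</loc>\n${imageTags}\n  </url>`;
    })
    .join("\n");

  return [
    `<?xml version="1.0" encoding="UTF-8"?>`,
    `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"`,
    `        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">`,
    urls,
    `</urlset>`,
  ].join("\n");
}

// Example with placeholder URLs.
console.log(
  buildSitemap([
    {
      loc: "https://www.example.com/product",
      images: ["https://www.example.com/img/product.jpg"],
    },
  ])
);
```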


Good tips on what it takes to rank well in a competitive niche

32:12

John Mueller January 11 2019" Another thing to keep in mind is that just because something is technically correct, doesn't mean that it's relevant to users and the search results. That it doesn't mean that it will rank high. So if you clean up your whole website and you fix all of the issues, but, for example, if your website contains lots of terrible content, then it still won't rank that high. So you need to, on the one hand, understand which of these technical issues are actually critical for your website to have fixed. And on the other hand, you really need to focus on the user aspect as well to find whatever issues that users are having, and how can my website help to solve those issues or to help answer those questions?  And that aspect is sometimes a bit tricky. So that's not something that I'd say is always trivial. And for some of these niches, there is a lot of really strong competition from people who have been working at this for a long, long time. And that can make it quite a bit more difficult than something that has a lot less competition. So unfortunately, no simple answer to getting high rankings and lots of traffic. "

 


Summary: It’s not enough to just be technically sound. You need really good content in order to rank well. Also, if there is strong competition that has been around for quite a while, ranking well will be difficult. (Our note: This is likely related to the expertise and authority components of E-A-T)


 

If you’ve switched to https but are still seeing visits to your http pages in GSC, what does this mean?

35:00

John Mueller January 11 2019"So in general, it doesn't mean that they're necessarily issues. But the thing to keep in mind is we do look at websites individually. So our algorithms don't necessarily say, oh, these are two websites from the same person. Therefore, they should be doing exactly the same thing. It can be that we still see issues with one website and that affects significant amount of traffic. It could be that we see the same issues with the other website, but because nobody is searching for those pages, you don't really notice. So that's something where I suspect if you're still seeing one third of your traffic going to the HTTP version after six months of moving to HTTPS, that something somewhere is still not lined up properly. And it could also just be that the way that you're measuring these things is not set up properly. So I would certainly recommend diving into that and figuring out what is happening here. Why are these redirects not working? Or why might Google be showing the HTTP version of these pages?  Or are people going to the HTTP version in some other way? All of those things might be different aspects that you could take a look at there."

 


Summary: It can be normal to see the odd http visit for one reason or another, but if after a significant amount of time you are still seeing a good portion of your traffic going to http versions of your pages, something is likely wrong.
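
One quick way to start "diving into that," as John suggests, is to spot-check that your HTTP URLs really return a 301 to their HTTPS counterparts. A rough Node/TypeScript sketch (sample URLs are placeholders; assumes Node 18+ for the built-in fetch):

```typescript
// Sketch: spot-check that HTTP URLs permanently redirect to HTTPS.
async function checkRedirect(httpUrl: string): Promise<void> {
  const res = await fetch(httpUrl, { redirect: "manual" });
  const location = res.headers.get("location") ?? "(no Location header)";
  const looksRight = res.status === 301 && location.startsWith("https://");
  console.log(
    `${httpUrl} -> ${res.status} ${location} ${looksRight ? "OK" : "CHECK THIS"}`
  );
}

// Placeholder URLs: swap in a sample of your own pages.
const sampleUrls = [
  "http://www.example.com/",
  "http://www.example.com/some-page",
];

Promise.all(sampleUrls.map(checkRedirect)).catch(console.error);
```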


Can you rank well globally even if you are using a country specific TLD?

44:10

John Mueller January 11 2019"That's a good question. That's something I hear from time to time. In general, if you have a country code top level domain, your site can still be very relevant globally. So if you're targeting a global audience and you have a country code top level domain, that's perfectly fine. That's not something that you need to change. However, if you're targeting individual countries outside of your own country, then with your country code top level domain, you wouldn't be able to do that. So for example, if you have a Swedish top level domain and you explicitly want to target users in, say, France because you have a special offering that works particularly well for users in France, then you wouldn't be able to geotarget for France with a Swedish top level domain. However, if you're saying my services are global and everyone worldwide should be able to take advantage of them, then you wouldn't see any change by going to a global top level domain versus your country code top level domain. The bigger effect that you might see there, though, is more about from a usability point of view, from a user point of view. If they see a Swedish top level domain in the search results and they're not based in Sweden, would they assume that this might not be something for them?  That's something you can help to resolve by using clear titles in English, for example, or in whatever language you're talking to the user about, and generally making sure that your site is clearly seen as a global web site so when people go there, they realize, oh, this is actually a global business. They're based in Sweden, which is really cool. But it's not the case that they would refuse my request for a service or a product if I had something that I wanted from them. So from that point of view, you don't need to go to a global top level domain if you're targeting a global audience. If you want to target individual countries, then yes, you would need to have something where you can set geotargeting individually. So that would be something like a .com, .eu. All of the newer top level domains, they are also seen as global top level domains."


Summary: This is fine. However, you cannot geotarget a specific other country if you are using a different country's ccTLD. As always, keep user experience in mind as well.


Is it considered cloaking if a site blocks all content from the United States, but makes an exception for Googlebot?

47:19

John Mueller January 11 2019"So yes, that would be cloaking. In particular, Googlebot generally crawls from one location. And if you're looking at a US based web site, we would be crawling from the US. Generally speaking, the more common IP geodatabases tend to geolocate Googlebot to California, which is where the headquarters are. I don't know if all of the Googlebot data centers are actually there, but that's where they tend to be geolocated to. So if you're doing anything special for users in California or in the location where Googlebot is crawling from, then you would need to do that for Googlebot as well. So in particular, if you see Googlebot crawling from California and you need to block users from California from accessing your content, you would need to block Googlebot as well. And that would result in that content not being indexed. So that's something that, from a practical point of view, that's the way it is. How you deal with that is ultimately up to you. One approach that I've seen sites do, which generally works, is to find a way to have content that is allowed across the different states and in the US so that you have something generic that can be indexed and accessed by users in California and that we could index as well and show in the search results like that. Or alternately, if California is in your list of states that are OK to show this content to them, that's less of an issue anyway."

 


Summary: This would be considered cloaking and goes against Google’s guidelines. Note: There is more information below in the full transcript on this.
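
For illustration only, this is roughly what the compliant setup looks like: the block is applied purely by visitor location, with no user-agent exception for Googlebot. The Express middleware and the lookupState helper below are hypothetical stand-ins for whatever geo-IP service you actually use.

```typescript
// Hypothetical Express sketch of the compliant approach: block by location only.
import express from "express";

const BLOCKED_STATES = new Set(["CA"]); // example: content not allowed in California

// Placeholder: resolve a request IP to a US state code via your geo-IP provider.
function lookupState(_ip: string | undefined): string | undefined {
  return undefined; // wire up a real lookup here
}

const app = express();

app.use((req, res, next) => {
  const state = lookupState(req.ip);
  // No user-agent check: if Googlebot's crawl location falls in a blocked state,
  // it is blocked like any other visitor and that content simply isn't indexed.
  // Making an exception for Googlebot would be cloaking.
  if (state && BLOCKED_STATES.has(state)) {
    res.status(451).send("This content is not available in your state.");
    return;
  }
  next();
});

app.listen(3000);
```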


 

Is it ok to wrap your logo in an H1 tag?

53:12

John Mueller January 11 2019"You can do that. I don't think Googlebot cares any particular way. In general, the h1 is the primary heading on a page, so I'd try to use that for something useful so that when we see the h1 heading, we know this is really a heading for the page. If for semantic reasons you just want to mark up an image without any alt text or without any text associated to the image, that's kind of up to you. You can also have multiple H1 elements on a page. That's also up to you. That's fine for us. So that's not something where our systems would look down on a page that's using an h1 tag improperly or that doesn't have an h1 tag."

 


Summary: This is ok, but ideally it would be best to make your h1 tag something that helps describe the content of the page.


Does a country level TLD help you rank better in that particular country?

56:23

John Mueller January 11 2019

 

"Yeah. Yeah. "


Summary: It can!


If you use rel=prev/next, will Google always show the first page of the series more prominently?

1:02:40

John Mueller January 11 2019

 

"...we try to use rel next rel previous, you understand that this is a connected set of items. But it's not the case that we would always just show the first one in the list. So you could certainly be the case that we show number three, number five, or something like that for a ranking. It can be a sign that maybe people are searching for something that's not on the first part of your list. So if it's something important that you think you'd like to be found more, than maybe you should find a way to restructure the list to put the important items in the front. Make it a little bit easier to have users find them when they go to that list. And accordingly, when we look at that list, usually the first page is highly linked within the website. So we could give it a little bit more weight if it just had that content that we thought was relevant. "

 


Summary: No! Google may show people an inner page in the series if it better matches their query. If you see this happen, you may want to look at putting this content on the first page of that series.


 

If you like stuff like this, you'll love my newsletter!

My team and I report every week on the latest Google algorithm updates, news, and SEO tips.

 

Full Transcript and Video

A Note from John: On the Search Console side, there are lots of changes happening there, of course. They've been working on the new version. I imagine some of the features in the old versions will be closed down as well over time. And some of those might be migrating to the new Search Console. I'm sure there will be some sections of Search Console, also, that will just be closed down without an immediate replacement, primarily because we've seen there's a lot of things that we have in there that aren't really that necessary for websites where there are other good options out there, or where maybe we've been showing you too much information that doesn't really help your website. So an example of that could be the crawl errors section where we list the millions of crawl errors we found on your website, when actually it makes more sense to focus on the issues that are really affecting your website rather than just all of the random URLs that we found on these things. So those are some of the changes that I think will be happening this year. And like I mentioned, some of these will be easier to move along with. Others will be a bit trickier. But in any case, there will be changes.

Question 3:31 - So we have a website that is very JavaScript heavy. It's written in React. And it uses server side rendering. So what we found out was that after some changes on our website, some time passed, and then suddenly our index results in Google search got completely broken. So we figured out that what was happening was that probably Google first read or crawled our site that was our pages that were server rendered. And then after some time, it executed JavaScript in our pages. And something went wrong in the context when the JavaScript object executed, and the page got broken during this execution of JavaScript. So I was actually wondering whether you can confirm that a Google crawler would override the results of its page that it impacts during server side rendering with the result that it got during execution of JavaScript.

Answer 4:43 -  Probably. Probably we would do that. So if you deliver a server side rendered page to us, and you have JavaScript on that page that removes all of the content or reloads all of the content in a way that can break, then that's something that can break indexing for us. So that's one thing where I would make sure that if you deliver a server side rendered page and you still have JavaScript on there, make sure that it's built in a way that when the JavaScript breaks, it doesn't remove the content, but rather it just hasn't been able to replace the content yet.

Question 5:37 - OK. So the thing that worked for us-- and I'm wondering whether we did the correct thing-- was that, as you say with dynamic rendering, we tried to serve different variants of the page to Googlebots and to regular people. So we removed all JavaScript completely from the pages that we were serving to Googlebot. It seems that this works. And we have lots and lots and lots of broken pages in Google search. And now we are getting somewhat closer to normal. So I was wondering, is it OK for Google to get a variant of the page that is not quite the same than what the normal user is getting?  

 Answer 6:27 - It should just be equivalent. So if you're doing server side rendering and all of the functionality is in a static HTML version that you serve, then that's fine. Whereas, if you do server side rendering and just the content is rendered, but all of the links, for example, don't work, then that's something where we're missing functionality and we might not be able to crawl it as well. The other thing to watch out for, depending on how you remove those JavaScripts, is that a lot of structured data uses JSON-LD blocks. So if you remove those too, then we don't have that structured data.

Question 7:14 - And a question related to that is that previously, some years ago, there was the idea that a search bot should have the same variant of the page as what a user gets; otherwise the situation was described as cloaking or something, when a bot was getting something different from the user. And it was supposed to be bad. What's the situation with this now?  

Answer 7:47 - That's still the case. With dynamic rendering, what you're doing is providing the equivalent content in a different form. And in a sense, that's the same as you often do with mobile pages where the desktop version is shown and the mobile version is a slightly different one. And it's the equivalent content, the same functionality. It's just served in a different way. So that's more how we see it. It's more like dynamically served mobile page than actual cloaking. Cloaking for us is primarily a problem when the content is significantly different. When Googlebot sees something that's very different from what a user would see, either that the content is very different, maybe it's spammy and not spammy content, or that the functionality is significantly different that we can't crawl it properly, that we can't see the real layout of the page. All of these things make it hard for us to properly judge the page with regards to how a user would see that.

Question 9:03 - Hi, I have one if that's all right around mobile usability testing. So we noticed about a week ago that Google was flagging us for mobile usability errors around the content being wider than the viewport and clickable buttons being too close together. For the last couple of instances, every single page we've retested comes back perfectly fine. It's only the validation that seems to fail. From what I can understand, that looks like assets aren't being loaded. But we can never reproduce what appears to be failing during validation. And we also notice that through the logs, Google appears to be fetching assets that we haven't referenced for over five months. So it feels like it's trying to load a very, very, very old version on the page and can't find the matching assets. We've tried to patch it so any missing style sheets go to that current working one, but that still hasn't fixed it. Is this potentially related to the new changes coming in with the mobile-first indexing?  

Answer 10:16 - The last part is the easy one to answer. The mobile friendliness evaluation is completely separate from mobile-first indexing. So that's something that's being done completely independently. The indexing side is more a technical matter of switching things over to a mobile crawler. And using the mobile version for indexing and mobile friendliness is more a matter of recognizing when a page works well on mobile devices so that we can show it a little bit higher in the search results.

I think what you're seeing there is probably related to the things that you touched upon. On the one hand, it might be that we're looking at older versions of the page. When we index a page, obviously we index it in one stage, and then we need to be able to test the indexed version. And depending on the timelines there, it might be that it takes a little bit of time for us to connect the two sides. So there could be some delay between the indexing of the HTML page and the rendering, which means we'd ideally still be able to render that page a little bit later. I don't know if five months is reasonable there. I think that that feels a bit long.

In general, I'd still recommend redirecting the old resources to whatever new resources you have. In particular, if you use versioned resource URLs. So if you have URLs that have a version number in them, and that version number changes every time you do a new push, if we can't access the old URLs to see some valid CSS, the JavaScript files that you use, then it's really hard for us to do rendering in a reasonable way there in that, when we render the page, if all of the resources don't work, then we see the static HTML page without the styling, for example. And then we don't know if it's mobile friendly or not. So some kind of a redirect from the old versions to the new versions of the embedded content-- that would be really useful.

The other thing that I've seen here is that sometimes it's more a matter of us just temporarily not being able to fetch those resources, which is more a matter of we'd like to crawl more from a website. But we temporarily can't do that because we have so many things lined up for that website. We don't want to overload it. And that's one thing I've seen sometimes trigger these issues in Search Console. And that, in general, is not a problem because we can line those resources up and crawl them a little bit later. But our systems internally will flag it like we couldn't test to see if this is really mobile friendly or not. And then Search Console is a bit too helpful and alerts you of all of these issues right away. So that's one thing we're going to work on to make sure that those alerts are a little bit more based on the stable version that we would use for indexing.

Question 13:32 - The first one goes back to 2011 where Pierre Far said that sites served with a CDN are not treated any differently than non-CDN sites. And the question is, is that still the case?  

Answer 13:44 -  And that is still the case. So for us, a CDN is not anything specific that we would explicitly call out and say, this is something that we need to treat specially, but rather just a way of hosting a website. And that's something that has implications with regards to how fast your website can load, where it's available to the audience, and all of that. That's perfectly fine.

Question 15:13 - I think the questioning kind of aims at the geotargeting aspect here. It goes on. Additionally, if I have a site hosted in a different country for my business using a CDN and I indicate in Search Console the country audience at the site is targeting, is that something I need to worry about?  

Answer 15:31 - That's perfectly fine. I think one of those aspects where a CDN makes a lot of sense-- where you have local presences with your CDN that make sure that your site is really fast for local users. And that's definitely not something that we would be against. So that's perfectly fine. In general, when it comes to geotargeting, we primarily use the top level domain. So if you have a country code top level domain, that's a strong sign for us that you're targeting a country. Or if you have a generic top level domain, the setting in Search Console. So with those two, we pretty much have things covered. And if your CDN has endpoints in different countries, that's not something we really worry about. If your hosting is in one country and you're targeting a different country, that's totally up to you. That's a decision that might make sense from your side maybe for financial reasons, maybe for policy reasons, whatever. As long as we can recognize the target country with either the top level domain or the Search Console setting, that's fine. The worry here is also that this setting isn't available in the new Search Console yet. So what does that mean? I think it's just a matter of time until the setting is also available in the new Search Console. We definitely plan on keeping this.

Question 16:58 - A question about Image SEO. We'll have a technical change in our shop that will change all of our image URLs. The compression of the images will remain the same. Does Google know this is the same picture? Or will we be losing rankings?  Should we set up redirects for image URLs?

Answer 17:22 - That's a good question. So yes, this will affect your website in Google Images. So in the image search, in particular, if we see changes in URLs with regards to embedded images, then that's something where we will have to go off and first recrawl those images, reprocess them, reindex them, get them all ready for image search again. So if you just change the URLs that are referenced within your pages, then that will result in those images being seen as new images first. And with regards to ranking, they'll have to work their way up again. So setting up redirects, like you mentioned, that's a fantastic way to do that. Because that way, we know the old images are related to the new ones, and we can forward any signals we have from the old ones to the new images. So that's really what you should be aiming for there. This is particularly relevant if a site gets a lot of traffic from image search. And it's something where image search is a little bit different than web search in that with images, we find that images tend not to change as much, so we don't recrawl them as frequently. So if you make significant changes in the image URL structure that you're using, it's going to take a lot longer for us to reprocess that. So in particular for images, you really need to make sure that those redirects are setup. And that's something that oftentimes you don't see directly because you load the page, it refers to the new image URLs. You don't realize that actually those redirects between the old image URLs and the new image URLs is missing. So if you get a significant amount of traffic from Google Images, make sure that you have those details covered.

Question 19:13 - If I change my complete website theme, will my rankings be stable or will they fall down? I will use the same content, the same URL pass, the same images, but the layout, JavaScript, and everything will change.

Answer 19:35 - So yes, this will result in changes in your website's visibility in Google. It's not necessarily the case that it will drop. It can also rise. So if you significantly improve your website through things like clearly marking up headings, adding structured data where it makes sense, using a clear HTML structure that makes it easier for us to pick out which content belongs together, which content belongs to the images, all that can have a really strong positive effect on your website and search. So that's something where you could see significant positive changes. And this is one of those areas where when you're working on SEO for your website, where you can make a big difference with regards to how we and other search engines would see your site. So it's not the case that your rankings will always fall when you make significant changes like this, but they will definitely change.

Question 20:36 - We're currently serving twice the content in the source code of our product pages. One block shows for desktop and the other one for mobile. One version is hidden with CSS depending on which site the user is on. Is that acceptable by Googlebot, or will it see it as malicious hidden content?  

Answer - 20:55 - So first of all, that's perfectly fine for Googlebot. We can deal with that. We can recognize when content is hidden and try to treat it slightly differently. However, it seems like something where maybe you're adding more complexity than you actually need, and where I suspect it's a bit trickier to maintain a website like that where you always duplicate the same content. So my recommendation there would be to try to find a way to use responsive web design to serve this in a way where you're not duplicating the content. That makes the pages a little bit smaller so they load faster, too. And then you don't have to worry about other search engines and how they might handle this. Again, from Googlebot's side, this is fine. It will probably result in both of these blocks being indexed for the same page. But in general, that's not something that would cause any problems.

Question 22:00 - If a company has different locations in different states, is it better to have different sites for each location or to have one main site with all of the locations on it with each pages having different schema structure to support the local SEO?  We want to target each state location.

Answer 22:22 - That's ultimately up to you with regards to multiple sites or one site. Personally, I recommend having one strong site rather than having it split up into multiple smaller sites. So maybe having one strong website for the company in general, and then individual landing pages for the individual locations so that people who are searching for those locations, they can find them and see what makes these locations a bit special. But at the same time, where you have one really strong website that is really easy to find in Google when people are searching for your company in general. So that would mean my recommendation there, you can, of course, if you prefer to have separate websites for these, that's something that's an option as well.  The thing I would watch out for here, though, is if you have separate websites is to think about how you want to deal with shared content. So in particular, if you have informational pages about the products or services that you offer-- if you're duplicating these across all of these different websites for the individual locations, then which of these pages would you like to have shown when someone is searching for something general, something about the product or service that you offer? And if you have that spread out across all of these different pages, then they're all kind of competing with each other. So it might make sense to pick one of these locations, say this is the primary location, and this is where I want to have all my general content indexed, and to have the other websites be a little bit more focused on the primary location and just listing their additional information there. So again, this is one of those things where, if you have separate websites, you're almost tending towards a situation where you are sharing a lot of content, and where you try to pick one main location anyway. So you might as well just make one really strong website and just have individual landing pages for the individual locations.  

Question 24:34 - After confirmation of using no index X-robots tag on the HTTP header for XML site maps, how would it deal with crawling frequency of XML sitemaps having no index?  

Answer 24:50 - So I think this is something that ended up being a little bit confusing to a lot of people. Unnecessarily so. So in particular, for us, an XML sitemap file is a file that is primarily meant for search engines to be processed automatically. It's not meant to be shown in search. And with that, it's something that we would treat differently than a normal HTML page. So normal HTML page, we try to process it, and see what it looks like, and pull out the links that are located there, and figure out how often we need to recrawl this page. But a sitemap file is essentially a machine file from one server talking to another server. And that can be treated completely differently. So we can fetch the sitemap file as frequently as we need. Servers can ping us a sitemap file and tell us, hey, this sitemap file changed, and we'll go off and fetch that sitemap file, and look at all of the contents there, and process that immediately. So that's something where a sitemap file is absolutely not the same as a normal HTML page on your website. Some sitemap generators have a fancy way of rendering the sitemap files so it looks nice in a browser. And that can be a little bit misleading because it quickly looks like something like an old sitemap HTML page that you might have had. But in general, an XML sitemap file is machine readable. It's meant for machines. It's processed very differently than an HTML page. So all of the effects that you're looking at for normal HTML pages, they would not apply to a sitemap file. Using the noindex X-robots tag is a way to prevent the sitemap file from accidentally showing up in web search, but it has absolutely no effect at all on how we process the sitemap file for sitemap processing.
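
Our note: if you do want that safeguard, serving the sitemap with an X-Robots-Tag: noindex response header is straightforward. A hypothetical Express sketch (paths are placeholders); it keeps the XML file out of web search results without affecting sitemap processing:

```typescript
import express from "express";

const app = express();

// Serve the sitemap with an X-Robots-Tag: noindex header so the XML file
// itself can't appear in web search; sitemap processing is unaffected.
app.get("/sitemap.xml", (_req, res) => {
  res.setHeader("X-Robots-Tag", "noindex");
  res.type("application/xml");
  res.sendFile("sitemap.xml", { root: "public" }); // hypothetical location on disk
});

app.listen(3000);
```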

Question 27:00 -  I'm seeing we don't detect any structured data on your site in Search Console. It's a live website for two years and I know it has been there since the beginning. It should be part of our code. I can't see why it's not picking it up. What might be the reason?  

Question 28:44 - People submitting XML sitemaps in the old one and it's showing pending, but the new one couldn't fetch.

Answer 28:52 - Not quite sure how you mean that there because the processing of the sitemap file is not done in Search Console. It's essentially just the reporting that we do in Search Console. So in general, if it works in one version of Search Console, it should work and the other one. I think with regards to sitemaps, that's also one of the areas where in Search Console, we'll probably see some changes because we've moved a lot of the functionality around sitemaps to the index coverage report where you can select individual sitemap files and see the actual effect on indexing-- how many of these URLs were indexed, which ones were indexed, which ones were not indexed. You can see all of that directly in Search Console, which I think is pretty neat. So I would imagine over time, we would take the old sitemaps report and turn that off, and try to make sure that we can move as many of the features as possible to the new Search Console.  

Question 30:22 - How can I rank well on Google?  

Answer 30:23 - That's kind of a broad question. I don't really have a generic answer that covers everything that you can do to rank well on Google. I think if you're coming at it at this level with regards to I don't know what to do, I don't where to start, one thing I would do is look at the SEO starter guide, which we moved now to the help center. That covers a lot of the basics around SEO and how to rank well. How to make a website that works well on Google. There are also a bunch of other SEO starter guides out there that are not from Google which are very good. So I'd look around, and go through some of those guides, and see the different aspects that are involved when search engines look at content. Look at web sites and what they would find important and interesting there with regards to understanding how to show these pages to users.

Question 31:26 - Is it OK to have a single sitemap containing the items plus images? Or is it better to have separate sitemaps for items and images?  

Answer 31:38 - Both of these approaches work. For us, what happens on a technical level is we take all of the sitemaps that we found for your website and we combine them into one big pile, and then we process them there. So how you split that information up into separate sitemap files is mostly up to you. It is something that you can pick a way that works well for your setup, for your DNS, for your server, for your infrastructure, whatever you have. So that doesn't really matter for us.

Question 32:12 - We're using a website builder startup called Ucraft.  I've been following your Hangouts and was asking questions about ranking for logo maker. There are zero issues on our website according to Search Console. We're providing fast performance on mobile and great UX. I'm unsure of what to do to help improve the rankings.

Answer 33:41 - Another thing to keep in mind is that just because something is technically correct, doesn't mean that it's relevant to users and the search results. That it doesn't mean that it will rank high. So if you clean up your whole website and you fix all of the issues, but, for example, if your website contains lots of terrible content, then it still won't rank that high. So you need to, on the one hand, understand which of these technical issues are actually critical for your website to have fixed. And on the other hand, you really need to focus on the user aspect as well to find whatever issues that users are having, and how can my website help to solve those issues or to help answer those questions?  And that aspect is sometimes a bit tricky. So that's not something that I'd say is always trivial. And for some of these niches, there is a lot of really strong competition from people who have been working at this for a long, long time. And that can make it quite a bit more difficult than something that has a lot less competition. So unfortunately, no simple answer to getting high rankings and lots of traffic.

Question 35:00 - I have two websites, both started using HTTPS six months ago with two different results. We did 301 on both of them. Site A, which gets less than 10 clicks on the HTTP version, all of the traffic is now on HTTPS. Site B still gets 1,000 traffic on the HTTP version. Is this telling me that site B has some issues somewhere or not?  

Answer 35:31 - So in general, it doesn't mean that they're necessarily issues. But the thing to keep in mind is we do look at websites individually. So our algorithms don't necessarily say, oh, these are two websites from the same person. Therefore, they should be doing exactly the same thing. It can be that we still see issues with one website and that affects significant amount of traffic. It could be that we see the same issues with the other website, but because nobody is searching for those pages, you don't really notice. So that's something where I suspect if you're still seeing one third of your traffic going to the HTTP version after six months of moving to HTTPS, that something somewhere is still not lined up properly. And it could also just be that the way that you're measuring these things is not set up properly. So I would certainly recommend diving into that and figuring out what is happening here. Why are these redirects not working? Or why might Google be showing the HTTP version of these pages?  Or are people going to the HTTP version in some other way? All of those things might be different aspects that you could take a look at there.


Question 36:56 -  If a site is blocking redirects and robots.txt that lead off the site and those URLs are getting indexed and ranking, could that impact the site negatively from an SEO standpoint?  

Answer 37:13 - So I think there are multiple angles here that come into play here. On the one hand, if those are redirects that are leading to content that we could otherwise index, then it might be that we're having trouble finding that content and indexing that content. So that definitely would affect the website from an SEO point of view. However, that applies more to the content that's being redirected to. So if the content that's being redirected to is a different web site, then that other website would be affected from blocking those redirects. If you're talking about the website that is doing the redirect and those redirects are going to somewhere else, then, in general, that wouldn't be such a big issue in that we see lots of URLs across the web that are blocked by robots.txt and we have to deal with that. However, that does mean-- or that could mean that these URLs which are blocked, if they're being shown in search, they're competing with the other pages on your website. So for example, if you have one page about a specific type of shoe and a redirect that's linked from there going to the official affiliate source where people can buy it, and that redirect is also being indexed and blocked because of the robots.txt block, then your page about the shoe and that redirect that's leading to the store for that shoe, they're kind of competing with each other. So it's not necessarily that we'd say this is a sign of low quality, it's more that, well, you're competing with yourself. We can't tell what that page is that you're linking to. Therefore, we might show both of these URLs and the search results. And depending on how you care about that, it might be that we're showing one that you prefer not to have shown in search. So that's the main aspect there. I wouldn't see it as something where it's an issue from a quality point of view, but more a matter of you're competing with yourself.

Question 39:32 -  Does Google negatively rank any search terms related to marijuana, medical marijuana, CBD oil, et cetera?  

Answer 39:42 -  I'm not aware of anything like that. So in general, the thing to keep in mind is that we can't really negatively rank any search term because if we move all of the results down by 10, then we'd still be starting with the same list. So there is no general way to say, well, we will rank these lower for these queries because then if everything starts a little bit lower, it's still the same list, right?   So I am not aware of anything specific that we're doing there. I could imagine that we have elements in our search systems that say, well, these might be topics that are particularly critical for people where people care about finding really strong, good information, so we should be a little bit more careful with regards to what we show there. We shouldn't just randomly show pages that just happened to mention these words as well. So that might be something that plays a role there. But it's definitely not the case that we would just generically demote everything with regards to these queries because like I mentioned, then we essentially just have the same list again. It's not that we would say, well, the first page of search results is empty. And we essentially wouldn't really change much.

Question 41:11 - In Search Console under the mobile usability report, there are a number of valid URLs compared to the number of indexed URL pages indicating how close or how far you are from the mobile-first index.

Answer 41:28 - So first off, again, mobile usability is completely separate from the mobile-first indexing. So a site can or might not be usable from a mobile point of view, but it can still contain all of the content that we need for the index. So an extreme example, if you take something like a PDF file, then on mobile that would be terrible to navigate. The links will be hard to click. The text will be hard to read. But all of the text is still there, and we could perfectly index that with mobile-first indexing. So mobile usability is not the same as mobile-first indexing.

Question 42:13 -  And I guess the question goes on. If only 30% of my currently indexed URLs are shown as valid in the mobile usability report, does that mean Google smartphone is having an issue crawling my site to find the remaining URLs?  

Answer 42:27 - No. No. That can be a little bit confusing with some of the reports in Search Console in that what you're primarily looking at there is to see if there is significant amount of problems with the pages that we find there. Not necessarily to see that the total number of indexed matches the total number shown across all of these reports. So in particular, reports like the mobile usability report-- I believe the structured data reports as well-- they're based on a significant sample of the pages on your website. They're not meant to be comprehensive, complete list of all of the pages of your website that are indexed. So if you see 1,000 URLs that are shown as indexed in the index coverage report, that doesn't mean you should see 1,000 URLs across the different reports within Google, within Search Console. It might be that you see 500 or 100 being shown in the mobile usability report. And as long as those that are shown in the report are all OK, then you should be all set. So it's not something where you need to aim to have the same number across all of these different reports.  Hearing me say this now feels like this is a bit confusing. So maybe we should make that a little bit clearer in Search Console, though. I can see how you might be tempted to say, well, if it says 1,000 here, I need to have 1,000 everywhere. Maybe we need to be a little bit clearer there. That's a good point.

Question 44:10 - We have a website with a Swedish country code top level domain, .se. We do business not only in Sweden, but also worldwide. Our business is about outsourcing software development services. It doesn't make sense from an SEO point of view to migrate from a country code top level domain to a generic top level domain like .com. Would it be easier to rank in search for generic queries?  Is it possible to rank high in search with a Swedish domain in countries other than Sweden?

Answer - 44:45  That's a good question. That's something I hear from time to time. In general, if you have a country code top level domain, your site can still be very relevant globally. So if you're targeting a global audience and you have a country code top level domain, that's perfectly fine. That's not something that you need to change. However, if you're targeting individual countries outside of your own country, then with your country code top level domain, you wouldn't be able to do that. So for example, if you have a Swedish top level domain and you explicitly want to target users in, say, France because you have a special offering that works particularly well for users in France, then you wouldn't be able to geotarget for France with a Swedish top level domain. However, if you're saying my services are global and everyone worldwide should be able to take advantage of them, then you wouldn't see any change by going to a global top level domain versus your country code top level domain. The bigger effect that you might see there, though, is more about from a usability point of view, from a user point of view. If they see a Swedish top level domain in the search results and they're not based in Sweden, would they assume that this might not be something for them?  That's something you can help to resolve by using clear titles in English, for example, or in whatever language you're talking to the user about, and generally making sure that your site is clearly seen as a global web site so when people go there, they realize, oh, this is actually a global business. They're based in Sweden, which is really cool. But it's not the case that they would refuse my request for a service or a product if I had something that I wanted from them. So from that point of view, you don't need to go to a global top level domain if you're targeting a global audience. If you want to target individual countries, then yes, you would need to have something where you can set geotargeting individually. So that would be something like a .com, .eu. All of the newer top level domains, they are also seen as global top level domains.

Question 47:19 - When we have to block content for certain US state IPs due to state laws, is it OK to exclude Googlebot from the restriction? Though it's technically cloaking, we want to be searchable for users from certain US states while not showing content for users from other US states. What's the best way to approach this situation?  

Answer 47:47 - So yes, that would be cloaking. In particular, Googlebot generally crawls from one location, and if you're looking at a US-based website, we would be crawling from the US. Generally speaking, the more common IP geolocation databases tend to geolocate Googlebot to California, which is where the headquarters are. I don't know if all of the Googlebot data centers are actually there, but that's where they tend to be geolocated. So if you're doing anything special for users in California, or in whatever location Googlebot is crawling from, then you would need to do that for Googlebot as well. So in particular, if you see Googlebot crawling from California and you need to block users from California from accessing your content, you would need to block Googlebot as well, and that would result in that content not being indexed. From a practical point of view, that's just the way it is. How you deal with it is ultimately up to you. One approach that I've seen sites take, which generally works, is to find a way to have content that is allowed across the different states in the US, so that you have something generic that can be indexed and accessed by users in California and that we could index as well and show in the search results. Or alternatively, if California is in your list of states where it's OK to show this content, that's less of an issue anyway.

The other thing to keep in mind is that we can't control what we would show in search results for individual states. So if, for example, your content were allowed in California and we can crawl and index it there, we would not be able to hide it in, say, Alabama or any of the other states, because we wouldn't know that it isn't allowed to show there. So it might make sense to block the cache page. If it's really a problem that people are not allowed to access this content at all, you might need to think about things like the description, maybe using a nosnippet meta tag in extreme cases, if even just the description might result in that content being shown. All of these things are a lot harder when you have restrictions like this. And the same applies globally as well: if you have restrictions across different countries, you need to show Googlebot the content that users in those countries would see when it crawls from those locations. And, again, we generally crawl from the US. So anything that is blocked for all users in the US, you would need to block for Googlebot as well, and maybe find an alternative way of having content that is generic enough to be allowed in the US that we could use for indexing. And similarly, you'd need to understand that if we can index that content, we would still show it to users worldwide.

So if there are any legal restrictions on that content being seen at all in other locations, that's something you might need to deal with and find ways to restrict properly, whether with a nosnippet meta tag, by blocking the cache page in search, or whatever other methods you need to use there. It might just be that you need to show an interstitial, which could be a really simple way to deal with that. But all of these legal topics are really tricky from our point of view. We need to make sure that we see the same content as a user from that location. That's our primary requirement. After that, everything else is more a matter between you and whatever legal guidelines you need to follow. That's not something where we could tell you what you explicitly need to do to make sure that this content is compliant.
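
Note: the two controls mentioned above, suppressing the snippet and blocking the cached copy, are both expressed through the robots meta tag. This is a minimal sketch only, not advice on whether these controls satisfy any particular legal restriction:

    <!-- On pages whose content is restricted in some locations -->
    <meta name="robots" content="nosnippet, noarchive">
    <!-- nosnippet hides the text snippet in search results; noarchive removes the cached-page link -->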

Question 53:09 - Should we wrap our logo in an h1 tag?

Answer 53:12 - You can do that. I don't think Googlebot cares either way. In general, the h1 is the primary heading on a page, so I'd try to use it for something useful, so that when we see the h1 heading, we know this is really a heading for the page. If for semantic reasons you just want to mark up an image without any alt text or without any text associated with the image, that's kind of up to you. You can also have multiple h1 elements on a page. That's also up to you. That's fine for us. So it's not something where our systems would look down on a page that uses an h1 tag improperly or that doesn't have an h1 tag.
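
Note: to illustrate the "use the h1 for something useful" point, a minimal sketch; the site name, file name, and heading text are made up.

    <!-- Allowed: the logo wrapped in an h1, with the alt text acting as the heading text -->
    <h1><a href="/"><img src="/logo.svg" alt="Example Store"></a></h1>

    <!-- Often more useful: keep the logo as ordinary markup and give the page a descriptive h1 -->
    <a href="/"><img src="/logo.svg" alt="Example Store"></a>
    <h1>Hand-made ceramic mugs</h1>
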
Question 54:26 - Following up on an earlier question: Google can deal with a single page that has the same schema information twice, in microdata and in JSON-LD. But what does Google do if the information is different?

Answer 54:40 - So this is a general SEO type of thing: if you're giving us conflicting information, you shouldn't be surprised if we don't know what to do with it. So if you're giving us information and you want us to do something specific, then make it as obvious as possible what you want us to do. Don't frame it in a way where Google should be able to figure out which of these elements is the right one and ignore the other one. It should really be as obvious as possible. So if you're using different structured data elements, and they're marked up in different ways, and they're kind of conflicting because they're not complete individually, then maybe we'll get it, maybe we won't. I wouldn't rely on us being able to interpret something that's a bit messy. Make it as clear as possible, and then we'll be able to follow it as clearly as possible.
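
Note: to make "as obvious as possible" concrete, a minimal sketch: mark the item up once, completely, in a single format, rather than splitting it across JSON-LD and microdata blocks whose values disagree. The product name and price below are invented.

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Product",
      "name": "Example Widget",
      "offers": {
        "@type": "Offer",
        "price": "19.99",
        "priceCurrency": "USD"
      }
    }
    </script>
    <!-- Avoid also marking up the same product in microdata with a different name or price -->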

Question 55:56 - Two quick questions. One is actually related to what you mentioned about the ccTLDs. So are you saying that, for example, if I have a .ro website for our agency and I want to target the international market, people from the United States or Canada, I wouldn't need to go to a .com domain or anything like that if I have--

Answer 56:23 - Yeah.

Question 56:25 - OK. And does the ccTLD still help you get better visibility in that specific country? Because that used to be the case.

Answer 56:34 - Yeah. Yeah.

Question 57:21 - And one more, regarding canonicals. We've noticed for one of our websites that Google seems to be ignoring our canonicals and choosing the inspected URL as a self-referencing canonical. Just to give you an example: this URL is a filtered URL and it has a canonical to another URL, and Google seems to index both of them and show both of them in search. I just can't figure out why. One reason I think that might be happening is that Google Tag Manager inserts an iframe in the head section, so I fear that Google might not be seeing the canonical. Although Search Console does report the canonical as being there, it just chooses to ignore it. The Google-selected canonical is the inspected URL instead of our declared canonical, and it's being shown in search. I'm not even sure what URL is being formed by Google. And yeah, I'm not really sure why it's showing up and why it's not being canonicalized.

Answer 58:41 - OK, let me see if I can spot something there. So I think one of the things to keep in mind is also that this site is on mobile-first indexing. So if you're doing something special with the mobile version, that might make it a little bit trickier… [The questioner adds:] It's not the only URL that's the problem. We have a lot of "indexed, but not in sitemap" URLs. Most of them have correct canonicals; they're just not being-- Google kind of doesn't trust our canonicals, something like that.

Answer 59:46 - I think in this case it's probably also something that seems like an irrelevant URL, so we don't put a lot of effort into figuring out how to index it. So it's indexed, but it's probably not that visible in search, I would imagine.
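
Note: for reference, the declared canonical being discussed is just a link element in the head; the URLs below are invented. A rel="canonical" is a hint rather than a directive, which is consistent with Search Console reporting a different Google-selected canonical. And if an iframe really is being injected into the head, that is worth checking separately, since elements that aren't valid inside the head can cause parsers to close it early, leaving later link elements in the body.

    <!-- On the filtered URL, e.g. https://www.example.com/category?color=red -->
    <head>
      <link rel="canonical" href="https://www.example.com/category">
    </head>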

Question 1:02:07 - And also, very quickly, related to this. It's not about canonicals; it's about rel=next / rel=prev. We also see, in terms of categories, that there are often times where, for the same site, Google seems to pick page five of the category each week when searching for keywords that are in the meta title. That wouldn't change from one page to another, but Google still seems to pick a deeper page rather than the first page of the category. Is that also something that--

Answer 1:02:40 - That can be normal. That's something where we try to use rel=next / rel=prev to understand that this is a connected set of items, but it's not the case that we would always just show the first one in the list. So it could certainly be the case that we show number three or number five, or something like that, for a ranking. It can be a sign that maybe people are searching for something that's not on the first part of your list. So if it's something important that you think should be found more, then maybe you should find a way to restructure the list to put the important items at the front, and make it a little bit easier for users to find them when they go to that list. And accordingly, when we look at that list, usually the first page is highly linked within the website, so we could give it a little bit more weight if it just had the content that we thought was relevant.
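
Note: for reference, the rel=next / rel=prev annotations discussed here are link elements in the head of each paginated page; the category URL below is invented.

    <!-- On https://www.example.com/category?page=2 -->
    <link rel="prev" href="https://www.example.com/category?page=1">
    <link rel="next" href="https://www.example.com/category?page=3">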

Question 1:04:09 - I have a question about emojis in the SERP. We're trying to experiment with them in our snippets, and I noticed one of our competitors has emojis in their SERP listings. So we tried to add one, but it doesn't appear to show up for us. We know that Googlebot has picked up the numeric description, but it's choosing not to show it. What are the metrics Googlebot uses to figure out whether or not to display the emojis?

Answer 1:04:41 - I don't know. So I think there are two aspects there. We don't always show exactly what is listed in the description or in the title, so that might be playing a role there. With regards to emojis, we also filter some of these out in the search results. In particular, if we think that it might be misleading, or it looks too spammy or too out of place, then we might be filtering it out. So depending on what you're showing and what you're seeing otherwise in the search results, if the same emoji is being shown for other sites, then we should be able to show it for your site as well. It's probably just a matter of us updating the title or description, picking that up, and actually showing it to users.
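
Note: an emoji can be written in a description either as the character itself or as a numeric character reference, which may be what "numeric" refers to in the question; both parse to the same text. The description below is invented, and as the answer says, Google may still choose not to display it.

    <!-- Either form works; use one or the other, not both -->
    <meta name="description" content="✅ Free shipping on all orders">
    <!-- Equivalent, written as the numeric character reference for the same emoji (U+2705) -->
    <meta name="description" content="&#x2705; Free shipping on all orders">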

Question 1:06:40 - About the Search Console URL inspection tool and the info: operator. Are both methods suitable for assessing the canonical URL? Will they sometimes return different results? Is there a delay?

Answer 1:06:57 - For a large part, the info: operator does show the canonical URL, but that's a little bit less by design and more accidental. So if you want to see the current canonical that we pick for a URL, you should really use the URL inspection tool. That's something where I know the inspection tool is specifically designed for that, whereas the info: operator can be a bit tricky there, in that it's more something that we provide for a general audience. So it's not really a technical SEO tool per se, and it can also change over time. So if you need to figure out what the current canonical version is, definitely use the inspection tool; don't just rely on the info: operator, the cache page, a site: query, or anything like that.