This was a really good hangout! We learned a lot by listening to this. We're playing with a bit of a new format here in which we include quotes from John Mueller that will be really easy for you to reference and use. We've also included a brief summary of each important point to help those who don't have time to read this whole article.

Does it help to have unique images on a site?

0:39

John was asked about a news site that uses images that are published on many other news sites.

John Mueller help hangout Nov 2018"I think having a unique photo is definitely a good idea because if it’s the same photo that is reused across a number of the different articles, we’ll pick one of those articles for image search to show as the landing page for that. So you’re kind of in the same group as everyone else if it’s the same photo. Whereas if it’s a different photo, then we can show it separately in Image Search. But that’s specific to Image Search, it’s not the case that if you have good images that they will make your site rank better in web search. So, it’s kind of separate there but that’s something where sometimes good images show up as well in the normal search results like where you have the images, one bar on top or something like that, so I think if you have a chance to have your own images then I think that is definitely worthwhile. And I guess with regards to object recognition, one of the things there I would definitely make sure that you tell us what these images are about as clearly as possible. So with the alt text, the the caption, all of the usual things.”


Summary: If you use unique images, you are more likely to appear in image search.


 

Good info on "crawled but not indexed" pages in GSC

2:46

The question asked was about what to do with posts that are marked as "crawled, but not indexed" in Google's Index Coverage report. We look at this section of GSC in our site quality reviews, as it is often a great place to find thin content. John said that it can be completely normal for a site to have pages that Google has crawled but decided not to index. We should not go and immediately noindex all of these.

John Mueller help hangout Nov 2018"So that’s something where we might crawl these, we may may look at them once and say, ‘oh, this is interesting,’ but then maybe we don’t keep them in search for the long run. So in particular, things like index pages or archive pages where actually the content is already indexed, and it’s just a matter of a page that links to this content, that’s something we don’t necessarily need to have indexed, so we might crawl but not index them."


Summary: It can be normal for pages to appear here. Pages whose content is already indexed elsewhere, such as archive or index pages, are the kind of thing that often shows up in this report.


 

 

How long does it take for Google to recognize a change in structured data?

5:50

A site owner asked about a situation in which they accidentally put the wrong phone number in their structured data.

 

John Mueller help hangout Nov 2018

"One thing that you can do is use the Submit to Indexing feature in fetches Google or I believe in the Inspect URL tool to let us know that this page has changed and that we should reprocess it. However, the things that are more of a secondary nature from the content, so not specifically, or not immediately tied to the indexing of the text on the page, the title, the URL, those kinds of things, those sometimes take a bit of time to get bubbled up again. So if we’re showing this phone number in maybe the knowledge panel on the side, or sometimes if it’s an image, or something that’s embedded within a page, then all of these things take a little bit longer than normal in the recrawling of the HTML page we kind of have to reflect all of those pipelines were on as well, and sometimes that can take a week or so. I would definitely give it some time.”


Summary: It can take a week or so.


Important info about legal interstitials

12:30

What happens if you legally need to use an interstitial such as an age verifier?

 

John Mueller help hangout Nov 2018

"The important thing with all these kind of legal interstitials is that Googlebot is not able to click through to them. So if you’re doing something where you’re redirecting to another page for this interstitial then going back to this primary content afterwards, setting a cookie maybe, if the user enters a date that’s old enough, that wouldn’t be something that Googlebot would be able to crawl through. So, with regards to that, if you had to set up this configuration where you redirect to a different page and then used that click to come back to with a cookie to the main content, then I would assume we would not be able to index any of that content which is probably not what you’re trying to do."

 

The important point? Make sure that Googlebot can see your content rather than just the interstitial. A good way to check this is to use fetch and render in GSC.

John goes on to give some possible solutions including making it so that the interstitial content is in a div that overlays the page, but in a way where Googlebot can still crawl the regular content. There are some JavaScript solutions as well.
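Here is a rough sketch of that overlay idea; the IDs, text, and script are our own illustration, not code from Google. The age gate sits in a div on top of the page, while the regular content stays in the HTML where Googlebot can see it:

    <body>
      <!-- Age gate shown as an overlay and dismissed with JavaScript after confirmation -->
      <div id="age-gate">
        <p>Please confirm that you are over 18 to continue.</p>
        <button onclick="document.getElementById('age-gate').remove()">I am over 18</button>
      </div>

      <!-- The normal content is still loaded in the HTML, so Googlebot can crawl it -->
      <main>
        <h1>Article title</h1>
        <p>The regular page content…</p>
      </main>
    </body>

A real implementation would presumably also set a cookie so returning visitors aren't asked again, along the lines John mentions in the transcript.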


Summary: Make sure Googlebot can see your actual content.


 

Interesting quote on linking externally

17:14

A site owner asked this question:

"When a person writes an article, does it help the page to rank better to link to another site on the article. For example, if I write about adjustable beds on my site, and I linked to a manufacturers site, does that help with pages ranking in Google or help the understanding of the topic a little bit better?"

John Mueller help hangout Nov 2018"It does help us to understand the context of the page a little bit better but it doesn’t really affect it’s ranking immediately. So it’s not the case that if you link to a handful of good sites that suddenly your pages will rank higher, because we think, ‘oh your site must be good as well’. This has become, I don’t know, completely popular tactic that used to be done quite a bit which maybe 15 years ago, where people would put up this completely spammy page and then on the bottom they would like to CNN, or Yahoo, or Google, or something, and assume that because there is some sort of link to these legitimate sites, then suddenly this spammy content would become more worthwhile to search engines. And at least at the moment, that’s not the case."

We do feel that legitimately linking out externally to helpful sources is something that can help improve rankings. We're not talking about randomly placing links to the odd .gov site or Wikipedia page. Rather, we think that if you have links on your pages that people actually click on, this can show Google that people are engaging with your site.

Also, this supports our thought that Google wants to see external references and citations for sites with medical information.


Summary: Authoritative external links coming from your articles may help users, but according to Google are not a direct ranking factor.


Not seeing rich snippets in the search results?

41:40

 

John Mueller help hangout Nov 2018"If we’re not showing them at all and they’re just shown as warnings in search console then I would assume that it’s not a matter of the recipe markup being bad, or kind of missing information. But more a matter of our systems just not understanding the quality of the website yet. That’s something where what you can sometimes do is a site query for a website where sometimes you’ll see the rich results shown and the site query but not for normal query results. And that’s a pretty strong sign that we can understand that these markups are there. But we’re just not showing it because we’re just not sure about the quality of the website."

 

He mentioned that it can take time for a new website to be seen as of high enough quality for Google to want to display rich snippets for it.


Summary: This could be a sign of Google thinking your website is low quality. Or your website may just be really new.


What does it mean if you have not yet been moved to mobile first indexing?

44:33

John Mueller help hangout Nov 2018"If it’s not moved yet it may just be that it hasn’t moved yet because we haven’t gotten through to the whole web. Or it may be that there are still some issues that you can look at in regards to mobile first indexing. Usually this falls into things like the text not being complete on the mobile version. Maybe the embedded content not being as complete. So images and videos not indexable in the same way. Maybe images are blocked by robots.txt, those kinds of things missing. Or maybe structured data where structured data might be missing on the mobile site but it’s actually there on the desktop site. There are some other criteria as well that we look at but those are kind of the three main things that we see issues with a lot for websites. If the website is responsive design then those issues don’t play a role at all."

John also mentioned that it's possible that in the future we will be able to see a message in GSC telling us what problems are holding a site back from being moved to mobile first indexing.


Summary: It may just be that Google has not gotten to your site yet. It could be that you have issues that make your content dissimilar on mobile as compared to desktop.


 

Is it ok to cloak affiliate links?

47:17

There are lots of ways to do this. What the poster was asking about is the type of thing where you hover over a link and see that it goes to something like mariehaynes.com/go/affiliate-product, and then when you click on it, that URL redirects you to the vendor.

John Mueller help hangout Nov 2018"I think people have been doing this for a really long time. There are various tricks that they use to kind of swap out the links and the URLs that are linked there. As far as I know the webspam team and the quality teams, they’re aware of these techniques as well. It’s something where usually what I see happening is that the webmaster comes up with this complicated scheme to hide the affiliate links and in the end, nothing changes. You add all of this complexity and nothing changes. From my point of view, I would just use a normal nofollowed affiliate link and leave it at that. There is no reason to try and hide that more than that. I don’t think it helps the website or the linked to website in any way… There’s lots of these weird schemes. I’ve never seen one where I tell you “Oh wow! They did something really fancy and they had an advantage from that”. Usually it’s like “Oh wow! So fancy and nothing changes”.


Summary: It's not likely to help. And it may annoy users.


 

Thinking of starting a website in a competitive niche?

Then read this!

 

John Mueller help hangout Nov 2018"I don’t really have an answer that kind of lets you jump over someone who has already built up a very strong presence online in those niches. That’s something where you basically just have to keep working at it or another approach that a lot of sites take is to say “well, this particular query is very competitive and it would very hard for us to rank very high in these search results for this query. Maybe we should focus on a variation that is a little bit more specific or that is a unique twist of this query that we think people will want to search for in the future that currently isn’t being covered by this strong competitor”.

 


Summary: It's hard to rank against well established competitors unless you can focus on a variation that is unique to you.



If you like stuff like this, you'll love my newsletter!

My team and I report every week on the latest Google algorithm updates, news, and SEO tips.


Full transcript

Question 0:39: I’m curious about the object recognition within images of what Google sees. So for a news site, if you use regular press images that a lot of news sites publish on the same article, would it, do you think, be more valuable if the readers see a unique photo, one that Google doesn’t recognize on other articles of the same topic? What is your take on that?

 

Answer 1:08: “I think having a unique photo is definitely a good idea because if it’s the same photo that is reused across a number of the different articles, we’ll pick one of those articles for image search to show as the landing page for that. So you’re kind of in the same group as everyone else if it’s the same photo. Whereas if it’s a different photo, then we can show it separately in Image Search. But that’s specific to Image Search, it’s not the case that if you have good images that they will make your site rank better in web search. So, it’s kind of separate there but that’s something where sometimes good images show up as well in the normal search results like where you have the images, one bar on top or something like that, so I think if you have a chance to have your own images then I think that is definitely worthwhile. And I guess with regards to object recognition, one of the things there I would definitely make sure that you tell us what these images are about as clearly as possible. So with the alt text, the caption, all of the usual things.”

 


 

Question 2:46: I see in my Search Console that many of my blog pages have been crawled but not indexed since the last major update. These are a lot of indexed pages which list different articles and an archive index, a portion of our how-to articles. Should we change these archive URLs to be followed, noindexed, or change the structure of the blog, split it into sections with relevant articles as opposed to just a blog section?

 

Answer 3:19: “So I think first off it’s important to know that this situation where we’ve seen URLs and we just don’t index them, that’s completely normal. That’s something that happens to pretty much all websites where we can find a lot of different URLs on a website if we crawl deep enough but that doesn’t mean that these URLs are necessarily useful with regards to search. So that’s something where we might crawl these, we may look at them once and say, ‘oh, this is interesting,’ but then maybe we don’t keep them in search for the long run. So in particular, things like index pages or archive pages where actually the content is already indexed, and it’s just a matter of a page that links to this content, that’s something we don’t necessarily need to have indexed, so we might crawl but not index them. In the new Search Console this is a little more visible, so that’s something where suddenly people are seeing this and are wondering ‘how do I fix this?’ but it’s really not something that you need to fix. It’s simply normal when it comes to search. We just don’t index everything that we’ve seen.”

 

Question 4:38: Can I use non-English language in the image geolocation tag in the image sitemap?

 

Answer 4:46: “Yes you can use non-English language there. What I would do as a, kind of, a rough approximation if it works, is just try that text in Google Maps and if that text in Google Maps points at a specific location, then you can be pretty sure that we can figure out what that location is. That said, I don’t know how much weight we would give a geolocation tag in image sitemaps. That’s something that I rarely see sites use so it’s very possible that our algorithms don’t actually rely on that too much, but if you can specify it, why not.”

 

Question 5:30: Need help with a structured data issue. We added our phone number to/in structured data for organization and customer service, however, it was the incorrect phone number. We’ve since updated it, but how long does it take for Google to make this change in the structured data?

 

Answer 5:50: “So, if you’ve made an update in the structured data there then that is essentially the right thing to do. One thing that you can do is use the Submit to Indexing feature in Fetch as Google or, I believe, in the Inspect URL tool to let us know that this page has changed and that we should reprocess it. However, the things that are more of a secondary nature from the content, so not specifically, or not immediately tied to the indexing of the text on the page, the title, the URL, those kinds of things, those sometimes take a bit of time to get bubbled up again. So if we’re showing this phone number in maybe the knowledge panel on the side, or sometimes if it’s an image, or something that’s embedded within a page, then all of these things take a little bit longer than normal in that, with the recrawling of the HTML page, we kind of have to reflect that in all of those pipelines as well, and sometimes that can take a week or so. I would definitely give it some time.”

 

Question 7:08: I have a question regarding this structured data. So I have seen a lot of websites actually harvesting all these reviews from their product pages...so just wondering, is this the right approach or is this kind of bad practice to add schema and then further get those star ratings in the searches?

 

Answer 7:38: “In general, that’s fine if you watch out for the policies we have for structured data for the rich results then I would go for that, that sounds good.”

 

Question 8:02: My question revolves around structured data as well...we have a couple of tv shows that are being displayed on a couple of brands of ours and unfortunately the panel gets mixed up and sometimes it displays the wrong brand for the wrong logo or the wrong link to the wrong site of our tv show. I was wondering how we could take charge of that? Maybe manually? Nothing we have tried seems to have worked.

 

Clarification: Where are you seeing this? Is it the logo image?

 

Reply: I put a link in the description/comments box

 

Answer 9:34: [Without permission to view the file] “Usually if it’s something that is in the knowledge graph I think as well it kind of goes in the same direction that it just takes a little bit longer to be updated. However, if you made these changes for a while now, then maybe that’s something that we just need to pass on to the team as well. For some of these kind of informational things that we show in the knowledge panel it’s also making sure that everything aligns. So for example, if you have a Wikipedia page on these tv shows or on these brands, make sure the correct logo is linked there as well so that we can really see that everything kind of matches together and we can trust the information that’s provided.

 

Follow up: Consistency...we actually took care of that but nothing seems to have changed. It might be because we have the same show on different brands, but somehow it got tangled up.

 

Answer 10:33: “That can make it trickier, yeah, if we can’t find this kind of 1-to-1 mapping between these different items then maybe we’ll show the wrong one or maybe we’ll assume that all of these brands are actually the same, [or] that we could use them interchangeably. Sometimes it's the case, but maybe not in your case.”

 

Follow up: You should be able to access [the file] now

 

Answer 11:06: [With granted access] I don’t know, I’d have to double check.

 

Question 11:22: Is voice now one of the ranking factors? Should we rely on it or simply rely more on speed and mobility of a website?

 

Answer 11:35: “So I don’t know how we would make voice a ranking factor, so that’s one part. I think over time people will use voice more and more to search. And that’s something that, I think, I would try to watch out for, but it’s not something where at the moment I’d say you’d have to do something specific for voice search. For the most part, completely normal websites, if we can understand their content for other things in search, we should be able to understand the content for voice search as well. And depending on the type of query that is given by voice search, maybe it makes sense to show a website, maybe it makes sense to show an answer, all of those different things. So I wouldn’t see voice as being a ranking factor on its own at the moment.”

 

Question 12:30: As a follow up with regards to the age gate tweet, with regards to a pop up that you can show if you need to check a user’s age.

Answer 12:49: “Uh I think, for the most part, that’s still a good approach to take. The important thing with all these kind of legal interstitials is that Googlebot is not able to click through to them. So if you’re doing something where you’re redirecting to another page for this interstitial then going back to this primary content afterwards, setting a cookie maybe, if the user enters a date that’s old enough, that wouldn’t be something that Googlebot would be able to crawl through. So, with regards to that, if you had to set up this configuration where you redirect to a different page and then used that click to come back to with a cookie to the main content, then I would assume we would not be able to index any of that content which is probably not what you’re trying to do. Um, there might be situations where you say well, this is the only way I can do it so maybe you have to take that into account, uh but in general if you do want to have that content indexed, you need to present that age interstitial in a way that Googlebot can still crawl the normal content. So that could be something like an html div that you overlay on a page, but keeping the rest of the page essentially still loaded so that Googlebot can still see that. I imagine that’s probably the best approach at the moment. You could also do something with JavaScript to load an interstitial, which from our point of view should work fine as well. Again provided that the normal content is still in the html afterwards. Um, if you need to do a type of interstitial that really goes to a different page and that doesn’t load any of the normal content in between, then what you might want to do is say, well this content won’t be indexed, but maybe I can make a simpler version of this content that doesn’t need to be behind an age interstitial. So perhaps you have a more descriptive page of the type of services you’re offering, you say well this is content I can show to everyone, and I can get indexed, and from there users can click and go to my other content that is behind an age interstitial or country interstitial or whatever you need to do there. So those would be the primary recommendations that we would have there.”

 

Question 15:25: the quality score metrics on Google PPC show above average landing page experience notification and click-through experience, can this be used as an indication that our landing page for a particular set of queries vs landing pages is already optimized for SEO or for Googlebot?

 

Answer 15:49: “The simple answer here is that the ads-related tools, the AdWords landing page test, the quality scores that you have there, they are not related to SEO. These are completely different systems on our side and from an ads point of view we may say that everything is fine, and from a search point of view we may say that we don’t think this content is actually relevant. So those are completely different things. Sometimes the ads landing page score can give you some information about things that you can improve. But they’re essentially completely different, they’re something that you need to take into account in completely different ways.

“The second part of your question - essentially says that’s ok, yeah, because like these are different length, these are different bots. On the one hand, but it’s not so much that they’re different bots, it’s really more that they’re completely different systems on our side. And like you mentioned, you could easily have an ads landing page that’s set to noindex, which would be perfectly fine for an ads landing page but obviously not-indexable at all from an SEO point of view.”

 

Question 17:14: When a person writes an article, does it help the page to rank better to link to another site in the article? For example, if I write about adjustable beds on my site and I link to a manufacturer’s site, does that help with the page’s ranking in Google or help the understanding of the topic a little bit better?

 

Answer 17:35: “It does help us to understand the context of the page a little bit better but it doesn’t really affect its ranking immediately. So it’s not the case that if you link to a handful of good sites that suddenly your pages will rank higher, because we think, ‘oh your site must be good as well’. This was, I don’t know, a completely popular tactic that used to be done quite a bit maybe 15 years ago, where people would put up this completely spammy page and then on the bottom they would link to CNN, or Yahoo, or Google, or something, and assume that because there is some sort of link to these legitimate sites, then suddenly this spammy content would become more worthwhile to search engines. And at least at the moment, that’s not the case. So if you write content, your content should be able to stand on its own, the links definitely help users, so if you link to other content that you think is relevant for the user at this point, maybe the problem, or the issue they’re trying to solve, that’s a good thing. With regards to follow or nofollow, use your normal techniques there. So if it’s a paid link you’re putting there because there’s a relationship there, then obviously use a nofollow. But otherwise, feel free to use a normal followed link for something that’s normal content that you’re just referring to. So from my point of view, I think the primary value there is really for the user and that the user comes to that page and they see a comprehensive view of this topic, they have more information they can follow up on if they want to, but essentially they have everything that they need there.”

 

Question 19:25: I have a question about embedding third party reviews on our websites such as reviews from Google, Yelp, Facebook. A third party provider offers this bit of JavaScript-based script that you can put on your site that’ll display the live reviews from the entities above. However, when you look at the page’s source, the reviews are not included in the page source, only in the JavaScript. What is Google’s view on this, is this acceptable, will Google frown upon it?

Answer: So, from a purely search point of view, it’s fine to have reviews on your website that come from some other places…..

 

Question 21:03 – General question about indexing speed and when Google crawls and indexes websites.

Answer 21:19 - I think it goes into the direction of general crawl rate and indexing speed of a website. So, I think there are two sides here that are always relevant with regards to a new website or any website in general when it comes to the crawl speed. On the one hand we try to limit the speed to avoid causing any problems on a website. So, if we crawl too fast then we might slow your server down, which doesn't help you either, or we might even cause problems on your server. That's something that you can also limit with the crawl rate setting in search console, so the crawl rate setting there is specifically about the normal crawling that we do. It does not mean that we would crawl that much. It's just if we wanted to crawl a lot of your website we need to make sure that we're not going above those limits. The other aspect that comes into play here is kind of like we talked about in the beginning with regards to crawling and indexing, in that Google doesn't index everything that it crawls and we don't crawl everything that we've seen links to. So just because we've seen a website like this, maybe we've seen a sitemap file, maybe you're writing a lot of articles, doesn't necessarily mean that we would crawl and index that content that quickly. So that's something that sometimes just takes a while, to kind of build up that trust essentially with Googlebot, so that Googlebot knows that when something new comes out on your website it has to go and crawl and index that as quickly as possible, and sometimes that takes a while. So that's something that, especially if you have a new website that is generating lots of content, might be playing a role there as well. So, on the one hand we have the maximum crawling that we do, more based on a technical thing, and on the other hand we have the maximum crawling that we would like to do, which is more based on whether or not we think it makes sense to spend a lot of time crawling and indexing content from your website. So both of those you can influence in different ways.

Question 23:08 - John, I have a question if you don't mind, in regards to the last two questions, JavaScript and indexing. I just pasted the URL into the chat; it's a new partner we're working with in Australia and it's a relatively new site and it's built with React JavaScript and they’re having a lot of indexing issues, and I just wonder if you'd mind taking a quick look and seeing if it's just because it's new and in JavaScript or whether there's another problem with it?

Answer 24:27 - I don't know, I'd have to take a look. Maybe we have more time towards the end… I think in general with regards to sites that are purely built in JavaScript, if it's like a pure React-based site that doesn't do any pre-rendering, then the thing to keep in mind is that all of this rendering takes a little bit longer to be done. So, what can happen is we will crawl the HTML and then we'll see, oh we need to do rendering here, and then we will put it in our list of things to render, and then it might take a couple of days maybe even a week or so for us to actually render the content. So, if you have a lot of items on this site that need to be crawled separately then that takes a lot of time. If you have content there that needs to be updated quickly, maybe new things that are coming in and old things dropping out fairly quickly, then that is probably somewhere you'd want to do more in the direction of pre-rendering or using dynamic rendering to give us the static HTML version of those pages.

Question 25:47 - I have a website with its VM in India. Not sure what VM is? Will Google crawl the site from India or from North America? Does crawling from North America take more time than from India? Google is not indexing my new sitemaps; it shows results from the old website, which are 301 redirected, which I've removed using directory removal in Search Console. There is no update about the index coverage of my new sitemap; it's an Angular Universal site.

Answer 26:25 - So I guess there are different parts here that we can take a quick look at. We do primarily crawl websites from the US; at least the IP addresses that Googlebot uses tend to map back to the US in most of these kind of lookup tools. Obviously that's something that's kind of arbitrary in that these tools have to make some assumptions there, because IP addresses are hard to map to exact locations, so sometimes these tools also guess wrong. But primarily we do crawl from IP addresses that are based in the US. That means if you're doing anything special on your website that would be different for users in India compared to users in the US, then that's something that might be worth looking into with regards to how Google actually indexes content. You can test that using the mobile-friendly test; like we mentioned before, the mobile-friendly test essentially crawls with the normal mobile Googlebot and shows the page how it would look from that point of view. So if you're seeing anything different when you check from India directly and you compare it to what Googlebot sees, then just keep in mind that Googlebot will index the version that it sees. It doesn't know to crawl your website from India, and as far as I know we don't use Googlebot IP addresses that map back to India.

Question 28:00 – Does this affect pagespeed at all?

Answer 28:31 - With regards to understanding how fast the pages are, we take different things into account, including lab data, where we kind of artificially measure the speed that this page would take, and also field data, where we see, for the users that are going to your pages, what the speeds are that they're seeing. So, if your pages are pretty fast in general and users in India are seeing your pages as being pretty fast, then that's pretty much all good, so that's not something where the crawling location of Googlebot would play a role in there. I guess the other part of the question was around sitemaps and indexing and crawling; I think we've talked about that a little bit. The one thing I do want to mention is the move from the old website to the new website that you mentioned there briefly, in that you redirected from the old site to the new site and then did a directory removal in search console. I would not use the removal tools for any time when you're moving a website, because what happens there is the removal tools don't change indexing at all, they only affect what is shown in search. So, if your old website is indexed currently and it's redirecting to your new website and you use the removal tools, then essentially your old site disappears, and until we've actually indexed the new site and understood all of those redirects, we wouldn't be showing anything. Because you're telling us not to show the old site, but we don't have the new site indexed yet. So, we don't have anything to show. So, if you're doing a site move or if you're moving part of your website, set up the redirects and let them take their time. They'll be seen at some point, they'll be followed at some point; it's not something that you would need to force, and using the removal tools would not make the site move go any faster. So if you still have those removals running then personally I would cancel them, and it's preferable to have your old URLs indexed, which redirect so that users get to the new one anyway, rather than not having anything shown at all in search.

Question 31:27 – Because we are using Angular will it take more time to index?

Answer 31:31 - It depends on how you set that up. With Angular Universal you can also make it so that you have a static HTML version that you can serve to search engine crawlers, which we call dynamic rendering, and if you have that set up then we can crawl and index those pages just the same as any normal static HTML page.

Question 32:53 - Hi John, I have a question because we are facing a new unique situation for one of our projects. So, one of our clients sells a lot of products online, and for one of the product categories they have a lot of products under the category, and they want to create a separate website for that product category. Now, they rank for the product category. So what would be the best option, a 301 redirect or the canonical tag? We are trying to implement the canonical tag instead of a 301 redirect to hold the rank, so that the ranking passes to the new website. What do you think is better?

 

Answer 33:40 – I think from a ranking point of view they're probably about the same in the end, and it's something where if you're splitting a website then you can't necessarily assume that the new URLs on a different website will rank the same as the previous ones. So that's always a bit tricky; if you move a whole website from one domain to another then we can move those signals fairly directly, but if you're taking one website and you're saying well, this small part here is now a separate website, then that means we have to reevaluate those websites individually. So we'll follow those redirects, but we don't have a way of saying well, this ranked like this, therefore this small part of the website will rank in the same way. It's possible that it'll rank similarly, maybe it'll rank a little bit better or a little bit worse, but generally it's not expected that it would rank exactly the same way if you're splitting a website or if you're merging websites.

 

Question 39:55 – I have a question regarding multiple news properties on one domain. We have several formats that provide news content. Not all of them are being accepted by the publishing tools. Therefore, I was wondering what has changed?

Answer 40:22 – I don’t know. So, specifically around Google News you’d have to go through the news publisher team feedback form. There’s also a news publisher help forum to find out more. The whole set up around Google News is unique and isn’t something that we include in the normal part of search. I don’t know if you’re from Germany, your accent sounds vaguely German. We’re doing a hangout for German news publishers in a week or two. If you post your questions there, I can try and find answers for you a little bit in advance.

Question 41:40 – I have a question about Google Search Console and recipe sites. We launched a bunch of recipes there in September and some of them are ranking in Google. But in Google Search Console, below the AMP section, it says there are zero valid. But then a bunch of them are valid with warnings. And some of the warnings are "missing video", even though there are no videos. Do we have to have video in order for it to be noted as valid in GSC?

Answer 42:23 – Are you still in the rich results for recipes in the search results?

Follow up 42:26 – No.

Answer 42:29 – I think that’s one of the trickier parts there, that some of these properties are recommended, or are things that are available at least, that we would use if we had them. So, I think those that are set as warnings there, those are issues where we look at it and we say “Well, it doesn’t have a video on it. We could still show it, but it doesn’t have a video, but if it did then we’d be able to show that video as well.” So, we’re trying to kind of show both the state where something is really broken, which would be a clear error, and something that you could do to improve this. So it’s kind of something that’s an opportunity if you want to go a little bit further than just the baseline recipe rich snippets. If we’re not showing them at all and they’re just shown as warnings in search console then I would assume that it’s not a matter of the recipe markup being bad, or kind of missing information. But more a matter of our systems just not understanding the quality of the website yet. That’s something where what you can sometimes do is a site query for a website, where sometimes you’ll see the rich results shown in the site query but not for normal query results. And that’s a pretty strong sign that we can understand that these markups are there. But we’re just not showing it because we’re just not sure about the quality of the website.

Follow up 44:04 – So that will probably take some time since this is a completely new website.

Answer 44:09 – Oh yeah. If it’s a completely new website it takes a bit of time. If it’s an existing website and you’re still seeing this then that’s something where I would try to find a way to improve the quality overall. But if it’s completely new then I think that it takes a bit for our systems to understand that.

Question 44:33 – So, we were wondering, if a website hasn’t been moved to mobile first indexing, what is the reason? Is there any reason from Google’s side, say for international websites? I know not all websites have moved over, but because all of our websites have moved over to mobile-first except one, is there any particular reason for that? For example, are there any geographical factors that play into it?

Answer 45:14 – We don’t have anything specifically set up for mobile first indexing with regards to location of the website or type of the website. It’s more a matter of our classifiers looking at the website and saying whether or not it’s ready for mobile first indexing. So, if it’s not moved yet it may just be that it hasn’t moved yet because we haven’t gotten through to the whole web. Or it may be that there are still some issues that you can look at in regards to mobile first indexing. Usually this falls into things like the text not being complete on the mobile version. Maybe the embedded content not being as complete. So images and videos not indexable in the same way. Maybe images are blocked by robots.txt, those kinds of things missing. Or maybe structured data where structured data might be missing on the mobile site but it’s actually there on the desktop site. There are some other criteria as well that we look at but those are kind of the three main things that we see issues with a lot for websites. If the website is responsive design then those issues don’t play a role at all. Then it’s probably just a matter of time to move things over. I suspect over the course of the next year we’ll see more and more of these sites also shifting over. We’ll probably also see some tools or messages in search console that make it easier to understand where the remaining issues are. But for the most part if it’s not mobile first indexed then that’s perfectly fine. That’s not something unique or…

Question 47:17 – I’ve noticed while researching some publishers in the United States that certain publishers prefer to write about other brands only when there is some sort of affiliate link that they can use to point towards their product or service. And as far as I know, Google tries its best to figure out these affiliate links and not count them in the whole ranking process. I’ve noticed that some affiliate networks, not sure how they do it, but it seems that the link just looks like a normal link. It’s an href and a link to a certain product or service, but when you actually click on that link or on the button, it actually moves you through the affiliate and changes your URL to the affiliate network and then goes to the actual website. So, I assume that they do this because they think that Google just parses the web page and finds the links, doesn’t see affiliate links, and doesn’t actually click on the button like a normal user would. So, I was wondering if there is something that you’re doing to kind of detect and understand that there is in fact an affiliate link and you should take it into account.

Answer 48:45 – I think people have been doing this for a really long time. There are various tricks that they use to kind of swap out the links and the URLs that are linked there. As far as I know the webspam team and the quality teams, they’re aware of these techniques as well. It’s something where usually what I see happening is that the webmaster comes up with this complicated scheme to hide the affiliate links and in the end, nothing changes. You add all of this complexity and nothing changes. From my point of view, I would just use a normal nofollowed affiliate link and leave it at that. There is no reason to try and hide that more than that. I don’t think it helps the website or the linked-to website in any way… There’s lots of these weird schemes. I’ve never seen one where I tell you “Oh wow! They did something really fancy and they had an advantage from that”. Usually it’s like “Oh wow! So fancy and nothing changes”.

Question 50:40 – I have a question regarding some of the generic phrases. For example, if you search for “body building supplement guide” you get the results from bodybuilding.com, and they’re all from the same site. So, it gets 10 blue links from the same website, bodybuilding.com. Is it possible that in this case Google is confused because the brand, which is bodybuilding.com, and the word, which is a generic word, bodybuilding, are the same thing? A website shouldn’t occupy all the terms regarding bodybuilding. And you can see that this website monopolizes a lot of the words and phrases regarding bodybuilding. Basically, it’s as if you searched with a site: prefix on bodybuilding.com when you search for something like bodybuilding. Is Google confused in that case that the brand and the generic term are the same thing, and should Google impose limits? Because I believe that if you have the website listed two or three or four times you shouldn’t list it more, because I mean 9 or 10 times is a bit too much.

Answer 52:00 – Yeah, this comes up every now and again. I think with regards to whether or not we would mix kind of the brand or domain name and the generic term, that’s something that for the most part I see us handling fairly well. Especially when I get questions from new websites where they say they bought the domain name that is keyword1.com and keyword2.com and “I’m not ranking for my brand name.” And when we see this we say “well, people who are searching for this, they’re not looking for your brand. They’re looking for those two keywords.” Just because your domain name has those two keywords in it doesn’t mean that we would rank it any higher than any other website. So, I assume a lot of the ranking with regards to that generic term, bodybuilding, for that website is also due to that website being around for a long time and being well known for these topics. So, it makes sense to show it in search.

Follow up 53:05 – Yes, but the guide is not super high quality so it shouldn’t take ten kind of links one after the other. Maybe give them 2 or 3, but not 10.

Answer 53:15 – That’s a different question. I think that’s something that we also get feedback on from time to time. It’s like, you’re showing too many of the same results. And I know there are teams at Google that are working on this and trying to find the right balance. It’s definitely not the case that we would say that 2 results are the maximum per website. Sometimes it makes sense to show lots of results from the same website. I don’t know if in this particular query it made sense, or if there are other things that it doesn’t highlight more. But yeah, in general it’s something where we go to the quality engineers and we show them different feedback and they say “people are looking for this website based on this query, based on what we’re seeing”, and some of it makes sense, or they say oh yeah, this is a mistake on our side, we’ll take this as a data point. And I know that the teams that work on quality and ranking, they constantly revisit this question of how many results we should be showing from the same website. That’s something that they look at all the time, because sometimes they’ll add more, sometimes it’ll go back a little bit. It’s really tricky to find that right balance.

Question 54:47 – This is regarding working on a Malaysian website. When I’m searching for certain keywords like “best credit card for students”, out of 10 results I get 9 results from U.S. websites. So is this something that Google works on, or do you think this is just how the query is handled? Someone looking for it in Malaysia for, let’s say, credit cards might want those things, but they mostly get results from U.S. websites. So, how can we tackle situations like this? Those sites have a huge authority over us, so they tend to kind of outrank all of us in the Malaysian market, so how should we react?

Answer 55:36 – Yeah, having a strong competitor is always hard. I don’t really have an answer that kind of lets you jump over someone who has already built up a very strong presence online in those niches. That’s something where you basically just have to keep working at it, or another approach that a lot of sites take is to say “well, this particular query is very competitive and it would be very hard for us to rank very high in these search results for this query. Maybe we should focus on a variation that is a little bit more specific or that is a unique twist of this query that we think people will want to search for in the future that currently isn’t being covered by this strong competitor”. So those are kind of the different directions that you can take there. And regarding which one you actually end up using, that’s kind of more of a strategic question on your side rather than just a technical SEO question that I can help answer.

Follow up 56:47 – The thing is, when we use the variations over here then yes, we tend to rank better. But when your keywords are more generic like not using the country or any other things, we don’t see ourselves ranking high.

Answer 57:25 – I think that’s kind of the normal struggle with SEO. Trying to jump above your competitor. I don’t really have the magic answer otherwise your competitor would just take that and jump over you again. It’s hard to say.