This week John answers some great questions regarding page speed, anchor text, and JavaScript. As always, the full video and transcript can be found below!

How is translated content treated?

0:33

I think it is fine to have a combined website like that. I think for users it makes sense to keep it easy for them to read, so that if you're an English speaker and you go to your website it is not a mix of Russian, Ukrainian, and English content, but rather this is all of the English content. It might not be all of the content that you have in the different languages, but that's fine. From an SEO point of view, it does make sense to turn translations into high quality content, so not just using Google Translate for that. The translation tools are getting better and better, but it is still the case that if you translate it by hand, or take the Google Translate version and clean that up and make it more readable, then that is better. That is something that users notice, and it is also something that we notice from an algorithmic point of view. If we can tell that this is really high quality content, then we will treat it better in the search results.

 


Summary: Be sure that your translated content is readable in other languages and not just auto translated with Google Translate. Also make sure that the translations are high quality and have value for users.


If my site runs on JavaScript and indexing time is critical for my content, how can I tell how long it's going to take to be indexed?

3:10

So in general, the first part is correct. It is the case that we try to index content as quickly as possible, which we can do if we have a static HTML version, and then the next step is we try to render the page like a browser would, and we pick up that content as well and use it for indexing. Those two things combined generally work together, but it is not the case that the static HTML version would be delayed (artificially) until the JavaScript version is ready. So from that point of view, for most sites it is not critical that there is this difference, and we don't have any explicit time that applies to how long it takes to start to render a page. That can differ depending on the type of page, when we found it, how we found it, what is happening around that page. For example, if we think that this is something that needs to show up in search really quickly, then we will try to render it immediately. So it is kind of hard to take into account; there is no fixed number there. In general I would use this as a rough guideline to determine if you need to do something about client side JavaScript content. For example, if you have news content that needs to be indexed quickly, then I would make sure that Google can pick that content up as quickly as possible without needing to render the page separately. For news sites, especially on the overview pages where you link to all of the new articles, I would really make sure that those pages work well purely with the static HTML served to search engines. So that is kind of how I would think about it: think about how critical it is that my content is indexed immediately, and not in terms of how many minutes this takes, because there is no fixed time for how long it can take.

 


Summary: For JavaScript sites, Google first indexes the static page and then needs time to process the JavaScript. There is no fixed time for how long that takes, so if your content is frequently updated and needs to be indexed quickly, it is better to serve it in the static HTML version of the page.
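Our note: as a rough sketch of what that looks like in practice (the URLs here are made-up placeholders), a news overview page whose article links sit directly in the server-rendered HTML can be picked up on Google's first pass, without waiting for JavaScript rendering:

```html
<!-- News overview page: the article links are present in the static HTML
     the server returns, so they can be crawled before any rendering happens. -->
<section class="latest-news">
  <h2>Latest news</h2>
  <ul>
    <li><a href="/news/2019/04/article-one">Article one headline</a></li>
    <li><a href="/news/2019/04/article-two">Article two headline</a></li>
  </ul>
</section>
<!-- By contrast, links that only appear after a client-side script fetches
     and injects them may not be seen until the page is rendered, which can
     take longer. -->
```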


What are the main differences between the URL Inspection tool and the Mobile-Friendly Test, if they both pick up pages from the live server?

9:16

So the idea with the Mobile-Friendly Test is just to test whether this version is mobile friendly, so that is kind of the primary focus. And the URL Inspection tool, the live testing tool, is meant more to see “how would this page do in indexing?” It checks things, I think, like noindex and the response code, kind of the usual things that would apply to whether we take this page and put it in our index or not. The Mobile-Friendly Test is mostly just focused on the mobile side, and the URL Inspection tool is like this big pocket knife with different features that you can use for different things.

 


Summary: The Mobile-Friendly Test is to see how well the page would do on mobile, whereas the URL Inspection tool checks whether indexing would work as it should, looking at things like noindex and response codes.
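Our note: the noindex checks John mentions refer to the standard robots directives; for reference, a page-level noindex looks like this (it can also be sent as an X-Robots-Tag HTTP header):

```html
<!-- In the page head: tells search engines not to index this page. The URL
     Inspection live test should report that indexing is not allowed here. -->
<meta name="robots" content="noindex">
```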


I am thinking of splitting up my product pages on my eCommerce site. Currently they are configurable on one page with multiple variations shown. What are your recommendations?

18:00

I think the aspect I would look at more is whether it really makes sense to split those products into separate pages, because what you are kind of trading is one product page that is fairly strong for that product and all of its variations, versus multiple pages that kind of have to work on their own and be supported on their own. So instead of having one really strong page for “running shoes”, you have multiple pages that have to battle it out on their own for “blue running shoes”, “red running shoes”, “green running shoes”. So if someone is searching for “running shoes”, then these small pages are really kind of not as strong as that one product page that you have for the main product. So my general advice there is to say: if you think that these variations are just attributes of the main product, in that people tend to search for the main product and then say “oh, which colour do I want? I found the product I want, I just have to pick the colour”, then I would put them on a shared page. Whereas if people are explicitly looking for that variation, and that variation is really unique and kind of stands out on its own, and people don't come to your site just saying “I want running shoes” but rather “I want this specific kind of running shoes in this colour”, then that is something maybe worth pulling out as a separate product.

 


Summary: Unless the product variations stand out on their own and people would be searching for that specific variation, it's better to keep them all on one strong page. That way the different variations of the product aren't competing with each other.


Is speed important for the mobile version of your site? 

21:00

So the good part is we have lots of ranking factors, so you don't always have to do everything perfectly. But this also means that you run across situations like this where you say “Google says speed is important, but the top sites here are not so fast, so it must not be important”. For us it is definitely important, but that doesn't mean it overrides everything else. You could imagine the fastest page that you could think of is probably an empty page. But an empty page would be a really terrible search result if you were searching for something specific. It's really fast, but there's no content there, so the user wouldn't be happy. So we have to balance all of these factors, the content, the links, and all of these signals, and try to figure out how to do the ranking based on this mix of different factors that we have. And it changes over time as well, it can change fairly quickly. For example, if something is really newsworthy at the moment then we may choose to show slightly different sites than for something that is more of a research, evergreen topic.

 


Summary: Speed is just one of the ranking factors Google uses. Just because top sites may not be fast doesn’t mean that it’s not important for your site.


What type of schema markup is preferable for Google: should we use JSON-LD, microdata, or microformats? Which is preferable?

22:30

We currently prefer JSON-LD markup. I think most of the new types of structured data come out for JSON-LD first so that's what we prefer.

 


Summary: JSON-LD is preferred by Google.
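Our note: JSON-LD goes into a script block on the page. A minimal, hypothetical example for a product page (names and prices are placeholders) looks like this:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example running shoes",
  "description": "Lightweight running shoes, available in several colours.",
  "offers": {
    "@type": "Offer",
    "priceCurrency": "USD",
    "price": "79.99"
  }
}
</script>
```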


How important is Anchor Text?

25:10

So I think, first of all, I wouldn't worry too much about Google patents; we patent a lot of things that don't necessarily apply to what webmasters need to do. So I think it's interesting to see what our engineers are working on, but that doesn't necessarily mean that search will be affected by that immediately. With regards to anchor text in general, we do use it. It's something that we do pick up; it's a great way to give us context about a link. In particular within your website, if you have a link that just says “click here for more information”, that's not very useful for us. Whereas if you have a link that says “you can find more information on this product page” and link to that page with the name of that product, then that tells us that maybe this page is really about this product. So I would certainly continue to look at the anchor text that you use, especially internally within your website, and try to make sure that you're providing anchor text that's really useful and provides context about what is linked on the page.


Summary: Anchor text is very important, as it provides context to Google about what the linked page is about. Anchor text like “Click here for more information” does not provide Google with much information, but anchor text like “You can find more information on this product page” tells Google that the linked page is about that product.

Our note: If you are making your own links, having too many keyword-rich anchor text links can possibly tip off the webspam team should you get a manual review.
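To illustrate the difference John describes above, here is a hypothetical internal link written both ways:

```html
<!-- Vague anchor text: tells Google very little about the target page. -->
<p>Click <a href="/products/trail-running-shoes">here</a> for more information.</p>

<!-- Descriptive anchor text: gives context about what the linked page covers. -->
<p>You can find sizes and prices on our
  <a href="/products/trail-running-shoes">trail running shoes product page</a>.</p>
```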


How does Google visually see your page?

39:00

We do try to look at the page visually, but mostly with regards to: is the actual content above the fold, or is the above-the-fold space just one giant ad? That's kind of what we focus on. Also, with regards to mobile friendliness, we try to visually map a page and see, is this a page that would work well on a mobile device? And for that we kind of have to map out the page; it's OK if some elements are unreadable as long as they work on a mobile device. If those links are there, they're the right size and people can click on them, then that's perfectly fine. If you're doing some fancy CSS transformation to turn this into 3D text, that's totally up to you. The important part is that the text itself is visible on the page and that you're not doing too much fancy markup to split that text up. So as an example, if you have a headline in the old system where you have a table-based layout and you wanted to split the heading on top, I've seen people put individual letters into individual table cells, and from our point of view that makes it pretty much impossible to see that this is actually one word, because you're using markup to split it up into disconnected chunks. From a parsing-the-page point of view that's really tricky.

 


Summary: Google mostly tries to see if the actual content is above the fold and not just a giant ad. In terms of mobile friendliness, Google tries to understand if the same links are there, everything is the right size, and people can click on those links. The most important thing is that the text itself is visible on the page.
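Our note: the table-cell example John mentions looks roughly like the first snippet below; the second keeps the heading as one piece of text and leaves the styling to CSS:

```html
<!-- Problematic: the heading is split letter by letter across table cells,
     so the markup breaks the word into disconnected chunks. -->
<table>
  <tr>
    <td>H</td><td>e</td><td>a</td><td>d</td><td>l</td><td>i</td><td>n</td><td>e</td>
  </tr>
</table>

<!-- Better: keep the text as one unit and apply any visual effect with CSS. -->
<h1 class="fancy-spacing">Headline</h1>
```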


Can you swap out a URL with Javascript?

43:00

Yes, we can pick that up. The important part, I think, is that the URL needs to be swapped out after the page is loaded. It shouldn't be swapped out when a user does a specific action. So for example, if a user hovers over a link and then you use JavaScript to swap out the URL, that wouldn't be something that we would notice, or if a user clicks on a link and then you use JavaScript to swap out the URL, then that also wouldn't be something that we would notice. But if the page loads and then you execute some JavaScript that cleans up the URLs so that they link to the proper canonical versions, that's perfectly fine, and kind of like we talked about in the beginning, when it comes to rendering sometimes this takes a bit of time. So it's not an immediate thing that we would pick up, and it's likely that we would pick up both of these versions, both the original link that you had there as well as the JavaScript version. So it wouldn't be that the old versions would drop out completely.

 


Summary: Google can pick it up if a URL is swapped out after a page has loaded. The thing to be aware of is to make sure the URL is not swapped out only when a user does a specific action, such as hovering over or clicking a link.
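Our note: a minimal sketch of the swap-after-load approach, assuming (purely for illustration) that the original markup carries the cleaned-up URL in a data attribute:

```html
<a href="/products/item?id=123&amp;ref=nav"
   data-clean-href="/products/trail-running-shoes">Trail running shoes</a>

<script>
  // After the page has loaded, rewrite the hrefs to the cleaned-up URLs.
  // Because this runs on load rather than on hover or click, rendering can
  // pick up the swapped URLs as well as the original ones.
  document.addEventListener('DOMContentLoaded', function () {
    document.querySelectorAll('a[data-clean-href]').forEach(function (link) {
      link.href = link.dataset.cleanHref;
    });
  });
</script>
```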


 

 

If you like stuff like this, you'll love my newsletter!

My team and I report every week on the latest Google algorithm updates, news, and SEO tips.

 

 

Question 0:33 - A question about translation. I know that if I translate English content to Russian or Ukrainian, most people only use Google Translate and just put in the content, but about 90% of it needs correction if you read it as a native speaker. I am interested in two points. First, if I want to add other language versions to my website, like English, Russian, or Ukrainian, is this OK? Second, if I find an interesting article about my niche and I want to translate it, is this good or not? Do I need to adapt it for domestic readers?

Answer 1:38 - I think it is fine to have a combined website like that. I think for users it makes sense to keep it easy for them to read, so that if you're an English speaker and you go to your website it is not a mix of Russian, Ukrainian, and English content, but rather this is all of the English content. It might not be all of the content that you have in the different languages, but that's fine. From an SEO point of view, it does make sense to turn translations into high quality content, so not just using Google Translate for that. The translation tools are getting better and better, but it is still the case that if you translate it by hand, or take the Google Translate version and clean that up and make it more readable, then that is better. That is something that users notice, and it is also something that we notice from an algorithmic point of view. If we can tell that this is really high quality content, then we will treat it better in the search results.

Question 3:10 - Google crawls and indexes content in two steps. First is the server-side rendering and second is the client-side rendering, and according to previous statements it may take days or weeks for this process to be complete. Sites using JavaScript can experience serious problems if indexing is time-critical. How can I tell how long it is going to take?

Answer 3:46 - So in general, the first part is correct. It is the case that we try to index content as quickly as possible, which we can do if we have a static HTML version, and then the next step is we try to render the page like a browser would, and we pick up that content as well and use it for indexing. Those two things combined generally work together, but it is not the case that the static HTML version would be delayed (artificially) until the JavaScript version is ready. So from that point of view, for most sites it is not critical that there is this difference, and we don't have any explicit time that applies to how long it takes to start to render a page. That can differ depending on the type of page, when we found it, how we found it, what is happening around that page. For example, if we think that this is something that needs to show up in search really quickly, then we will try to render it immediately. So it is kind of hard to take into account; there is no fixed number there. In general I would use this as a rough guideline to determine if you need to do something about client side JavaScript content. For example, if you have news content that needs to be indexed quickly, then I would make sure that Google can pick that content up as quickly as possible without needing to render the page separately. For news sites, especially on the overview pages where you link to all of the new articles, I would really make sure that those pages work well purely with the static HTML served to search engines. So that is kind of how I would think about it: think about how critical it is that my content is indexed immediately, and not in terms of how many minutes this takes, because there is no fixed time for how long it can take.

Question 6:17 - And now we have a giant, long question about understanding the flagging in Search Console when something is flagged as “Duplicate, Google chose different canonical than user” or “Duplicate, submitted URL not selected as canonical” issues.

[User Chimes in] It is a JavaScript page. It is pre-rendered, all of the content is in the static HTML, and yet Google is still trying to execute the page. And we are finding in this eCommerce environment that these unique product pages with unique descriptions and meaningful content are being flagged by Google as duplicate content. We assume that this is down to some sort of JavaScript rendering failure, where Googlebot keeps seeing the same error page or something like that and therefore thinks that they are duplicates. How can we understand what it is that the web rendering service ends up with that might be resulting in content looking duplicate to Googlebot?

Answer 7:43 - I would have to look at some actual examples, so if you could send me some examples that would be really useful.

Question 8:00 - Is Google aware of an on-going problem?

Answer 8:00 -  No… I have heard from some sites that were complaining about this more than others, so if you send over some examples that would be useful.

Question 8:24 - If a URL is flagged, is there any way of seeing anything about the rendering that happened at the time of that analysis? Is there any way to see the resource loading errors that occurred in one that was indexed? You don't have that UI in Search Console, or at least I can't get to it. And/or is there any place where we can see what the content actually looks like at the time that the indexer and/or duplicate detection finder looks at it?

Answer 8:41 - Not at the moment. That is something that would make a lot of sense to have. For most sites it is not critical, but in cases like this it would be useful to have.

Question 9:16 - Next question: the Mobile-Friendly Test. You have referenced it a couple of times in terms of understanding how the indexer would see content. However, we are finding that there are a lot of errors reported in the loading of resources, and the rate at which those errors occur seems to vary from domain to domain. So first question: are the errors that we see in the Mobile-Friendly Test representative of the errors that the service would encounter during indexing? Or are there different resource allocations for the Mobile-Friendly Test?

Answer 10:04 - So I think you are referring specifically to embedded resources that are pulled in for the test, right? Like JS files, CSS, different responses, that kind of thing? I think one of the aspects is a bit complicated at the moment, in the sense that we have different priorities for the Mobile-Friendly Test compared to normal Googlebot. We try to pull in resources as quickly as possible from the live server, whereas during indexing we cache a lot of the resources and just take the cached version of the page. So what you might see in the Mobile-Friendly Test is that we try to render this page as quickly as possible and we can get a lot of these resources, but for some of them we essentially time out because we don't have the capacity to pull them live from the server. That accounts for a large part of the errors that you see with the Mobile-Friendly Test. I also believe that in the URL Inspection tool, if you use a live test, we are trying to pull everything live, and sometimes it is just not possible to do it live. And for indexing we have a little bit longer time available for that. So if we see that these resources are needed, we will pull them, we will cache them, and we will try to have them available for when we try to do the rendering. So that is something where we don't need to do it live, so if it takes a little longer to do that, we will be patient and wait for all of that to come together. There is still an aspect of the timeout there: if these resources are all such that we can't cache them (for example, session IDs in all of the JS URLs), then that makes it really hard for us to keep a cached version and reuse it later, and those are ones that we might, I dunno, for whatever reason, not be able to fetch for indexing. In short, I think it makes it hard to diagnose issues like this, especially if you have a lot of embedded resources. The guideline I generally get from the engineering team is that we should just tell people to have fewer embedded resources, and then they tend not to run into this problem. That is not always that easy, but in general what I would do is take the Mobile-Friendly Test as a rough guide: if it works in the Mobile-Friendly Test then you are definitely on the safe side, and if you do see some things timing out with other errors, then for the most part we can still use that for indexing.

Question 12:57 - You have touched on it, I guess, tangentially: should we be aware of any hard or arbitrary timeout on rendering by the web rendering service? Is there any clarity on what content actually gets used if a page takes a long time to be rendered by Googlebot in the rendering service? Does it just give up and use the page HTML content that was there originally? Do we have any clarity on how far into the rendering process you can end up?

Answer 13:45 - For the most part, if something breaks or times out, we just take a snapshot then and there. So yeah… I think in your case, if you are pre-rendering the content, then there shouldn't be a problem there because the content is there. What we sometimes see with eCommerce sites, or with sites that are using a very templated framework, is that we run into situations where we assume there is duplicate content before we actually test the URLs. So this can happen if we, for example, see a URL pattern. If we access a bunch of URLs with different patterns or different parameters, for example, and we see that all of these URLs are leading to the same content, then our systems might say “OK, well, this parameter is not so relevant for the website after all”, and we tend to drop those URLs then and say “this set of URLs is probably the same as this other set of URLs that we have already crawled”. So in particular, one example that I have seen quite a bit is if you have quite a lot of different eCommerce websites and they all sell the same products - so the whole path after the product part of the URL is the same across a large number of domains - then our systems will say “all of these URLs are the same, they are all leading to the same product”, so therefore we might as well just index one of these domains instead of all of these domains. I don't know if this would apply in your case, so I don't know if that is useful, but it is one of those things where our systems try to optimize for what we find on the web, and we assume that other people make mistakes on the web too, and we try to work around that. We see “all of these people are creating duplicates, but we don't need to crawl all of these duplicates”, so we can focus on what we think are the actual URLs.

Question 16:22 - To pick up on the Mobile-Friendly Test and the live test in the URL Inspection tool: in the Mobile-Friendly Test, Googlebot tries to pick up a page from the live server, so how is it different from the URL Inspection tool with this feature? How is it different rendering-wise, or in fetching the pages and the resources?

Answer 17:04 - So the idea with the Mobile-Friendly Test is just to test whether this version is mobile friendly, so that is kind of the primary focus. And the URL Inspection tool, the live testing tool, is meant more to see “how would this page do in indexing?” It checks things, I think, like noindex and the response code, kind of the usual things that would apply to whether we take this page and put it in our index or not. The Mobile-Friendly Test is mostly just focused on the mobile side, and the URL Inspection tool is like this big pocket knife with different features that you can use for different things.

Question 18:00 - On one of our client's eCommerce sites, some of the products are sold as configurable, meaning that multiple different variations of the product are shown and managed on the same page. We are thinking of splitting those pages so that each variation has its own separate product page with a brand new URL. The customer reviews would then be moved from the old page to the new ones. Could the fact that the old reviews have an older date than the new pages' creation date be marked as black hat SEO?

Answer 18:37 - So, I don't see any problem at all with the reviews, so long as you can continue to add new reviews to those product pages. I think the aspect I would look at more is whether it really makes sense to split those products into separate pages, because what you are kind of trading is one product page that is fairly strong for that product and all of its variations, versus multiple pages that kind of have to work on their own and be supported on their own. So instead of having one really strong page for “running shoes”, you have multiple pages that have to battle it out on their own for “blue running shoes”, “red running shoes”, “green running shoes”. So if someone is searching for “running shoes”, then these small pages are really kind of not as strong as that one product page that you have for the main product. So my general advice there is to say: if you think that these variations are just attributes of the main product, in that people tend to search for the main product and then say “oh, which colour do I want? I found the product I want, I just have to pick the colour”, then I would put them on a shared page. Whereas if people are explicitly looking for that variation, and that variation is really unique and kind of stands out on its own, and people don't come to your site just saying “I want running shoes” but rather “I want this specific kind of running shoes in this colour”, then that is something maybe worth pulling out as a separate product. So that is the distinction I would maybe worry about - I wouldn't worry so much about the reviews part.

Question 20:36 Has the new Search Console schema markup functionality remained the same as in the old one?

Answer 20:50 I don't know what the exact plans are for the new Search Console in regards to structured data features, but we do plan to support all of those structured data features. So what might happen is that these features end up in Search Console in a slightly different way.

Question 21:00 What about speed for the mobile version, is it crucial to have speed in the green zone and if yes, why are a lot of the top sites still so slow?

Answer 21:20: So the good part is we have lots of ranking factors, so you don't always have to do everything perfectly. But this also means that you run across situations like this where you say “Google says speed is important, but the top sites here are not so fast, so it must not be important”. For us it is definitely important, but that doesn't mean it overrides everything else. You could imagine the fastest page that you could think of is probably an empty page. But an empty page would be a really terrible search result if you were searching for something specific. It's really fast, but there's no content there, so the user wouldn't be happy. So we have to balance all of these factors, the content, the links, and all of these signals, and try to figure out how to do the ranking based on this mix of different factors that we have. And it changes over time as well, it can change fairly quickly. For example, if something is really newsworthy at the moment then we may choose to show slightly different sites than for something that is more of a research, evergreen topic.

Question 22:30 What type of schema markup is preferable for Google: should we use JSON-LD, microdata, or microformats? Which is preferable?

Answer 22:48 We currently prefer JSON-LD markup. I think most of the new types of structured data come out for JSON-LD first so that's what we prefer.

Question 23:00 Did Google do any heavy updates in February or March?

Answer 23:15 I don't know, I mean we do updates all the time. I don't know what you would consider heavy. It probably depends on your website. If your website was strongly affected by one of these updates, you probably think it's pretty heavy. If we look at the web overall, maybe it's just normal changes as they always happen.

Question 23:24. What does thin content mean for affiliate websites?

Answer 23:34 Thin content doesn't mean anything different for affiliate sites compared to any other websites. What we've seen, especially with affiliate websites, is that there is this tendency to just take content from a feed because it's really easy to do. You can get scripts that do this for you fairly quickly, it's easy to do, you don't have to do a lot of work, and it creates a lot of URLs. But of course for users and for us it's not really that interesting, because you're providing the same thing as everyone else already has.

Question 24:40 Is hidden content in tabs a problem for indexing?

Answer 24:50 In general it's not a problem for indexing. It can be a problem for users, so if there is content there that you think users really need to see in order to convert then that would be kind of problematic from your point of view. With regards to indexing we can pick up that content and we can show it, so that's less of a problem.
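Our note: for example, tabbed content is typically still present in the HTML and just hidden until the user clicks the tab; a simplified sketch:

```html
<!-- Both panels are in the HTML, so the content can be picked up for indexing
     even though only one panel is visible at a time. -->
<div role="tabpanel" id="description">Product description text…</div>
<div role="tabpanel" id="specifications" hidden>Technical specifications…</div>
```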

Question 25:10 Is anchor text still an important ranking factor in 2019? Lots of companies have done studies where they point out there is no correlation. And then there's a link to a Google patent.

Answer 25:30 So I think, first of all, I wouldn't worry too much about Google patents; we patent a lot of things that don't necessarily apply to what webmasters need to do. So I think it's interesting to see what our engineers are working on, but that doesn't necessarily mean that search will be affected by that immediately. With regards to anchor text in general, we do use it. It's something that we do pick up; it's a great way to give us context about a link. In particular within your website, if you have a link that just says “click here for more information”, that's not very useful for us. Whereas if you have a link that says “you can find more information on this product page” and link to that page with the name of that product, then that tells us that maybe this page is really about this product. So I would certainly continue to look at the anchor text that you use, especially internally within your website, and try to make sure that you're providing anchor text that's really useful and provides context about what is linked on the page.

Question 27:00 Can you tell us how the DMCA process works?

Answer 27:01 I cannot tell you how that works because I don't know the details about that, and it's also a legal process and I can't give you legal advice.

Question 27:30 How does a content platform like Medium get its status as a content provider? When I check the transparency report for Medium, the status is “check a specific URL”; it's hard to provide a specific status for a site like Medium that has a lot of content. We're also a content provider, hosting supermarket catalogs and other PDF publications online, generated by users. So I guess the question is, how do we get that status?

More context from the person who asked the question: Basically the problem we're trying to solve is that our platform allows adding outgoing links in the catalogue, and if one specific URL is flagged for going to a bad site, our entire domain is at risk, and we have been blacklisted before. Basically it fits the bill because we have a large amount of content and all of it is user generated, so how does one go about getting that standing?

Answer 28:48 Um, I don't know. Is it mostly in regards to the transparency report with regards to phishing or maybe malware? It sounded like originally you just wanted the status that's provided in the transparency report, but with regards to links and the content that's provided, that sounds more like it's towards phishing or spam?

We're trying to solve for the issue where the domain is blacklisted for phishing and spam. Under the hood we are solving that problem, but the generic solution seems to be something like this, because even if individual long-tail domains are blacklisted for a period of time, our main commercial domain is safe. Is that even a good assumption?

Answer 29:50 I don't know how to best attack that. So I think from my point of view, there's one thing that you could do; I don't know your website, so it's hard for me to say already. Make it so it's easier for us to understand which parts of your website belong together. So for example, if you have different subdomains per user, then it's easy for us to say, well, this problem is isolated to this specific subdomain or subdirectory, and our algorithms can then focus on that on a subdomain level. Whereas if all the content is within the main domain and the URL structure is a slash and then a number, then it's really hard for our algorithms to say that everything that matches this pattern is maybe phishing or spam that wasn't caught in time. The easier you can make it for us to figure out which parts belong together - which could be by user, or could be by type of content, depending on how you group the content - the easier it is for us to kind of match an action that applies just to this part of the website.

Question Continued 31:33 There is no process that you're aware of to apply for or get the status of content provider? And does it actually link to having decreased risk of whole-domain blacklisting?

 

Answer 31:46 I don't think the two sides are connected. I think that content provider status in the transparency report is something that's specific to the transparency report and wouldn't apply to the spam handling. We do have some folks here who are working on something specific for hosters or CMS providers, which I think is kind of what you fall into here, to try to give them more information on where we see spam and to better understand the grouping of content in regards to individual websites.

Question 37:00 Is it necessary to add hreflang links to paginated pages beyond page one?

Answer 37:18 So it's never necessary to add hreflang links, that's kind of the first thing there. It's not like you will be penalized if you don't have those links on all pages across your website, but those links do help us better understand which pages belong together. Hreflang links work on a per-page basis, so if the links work between the homepage versions of your site and not between the product pages on your website, that's perfectly fine. Use them for the URLs that you think need to have that connection for the language and country versions; you don't need to do that for everything. The other thing is, sometimes doing hreflang links properly is really complicated, especially if you're mixing things like pagination and maybe filtering. Then that feels like something where you're adding so much complexity that it's unlikely you will end up with a useful result, and I would just drop the hreflang links so that you don't have extra noise in Search Console. That's kind of the pragmatic approach that I would take there. Use hreflang where you see that you have problems, and if you don't have any problems in regards to localization, then don't worry about the hreflang part.
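Our note: as a reminder of what per-page hreflang annotations look like (the URLs are placeholders), each language or country version points at its counterparts, and the set is mirrored on every version:

```html
<!-- On https://example.com/en/running-shoes (and mirrored on the other versions): -->
<link rel="alternate" hreflang="en" href="https://example.com/en/running-shoes">
<link rel="alternate" hreflang="de" href="https://example.com/de/laufschuhe">
<link rel="alternate" hreflang="x-default" href="https://example.com/running-shoes">
```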

Question 39:00 Does Google determine that a page is low quality by taking into account what the page looks like visually? I have a site that has elements that get 3D-rotated when a user taps on them; when I look at this page as Googlebot, it sees these elements with the text backwards and it looks weird. Is that a problem or not?

Answer 39:20 From my point of view, that's no problem. We do try to look at the page visually, but mostly with regards to: is the actual content above the fold, or is the above-the-fold space just one giant ad? That's kind of what we focus on. Also, with regards to mobile friendliness, we try to visually map a page and see, is this a page that would work well on a mobile device? And for that we kind of have to map out the page; it's OK if some elements are unreadable as long as they work on a mobile device. If those links are there, they're the right size and people can click on them, then that's perfectly fine. If you're doing some fancy CSS transformation to turn this into 3D text, that's totally up to you. The important part is that the text itself is visible on the page and that you're not doing too much fancy markup to split that text up. So as an example, if you have a headline in the old system where you have a table-based layout and you wanted to split the heading on top, I've seen people put individual letters into individual table cells, and from our point of view that makes it pretty much impossible to see that this is actually one word, because you're using markup to split it up into disconnected chunks. From a parsing-the-page point of view that's really tricky.

Question 41:26 - I've heard that changing a title tag for a page will cause it to drop in ranking temporarily. Is that true? What if it's just a number that has changed in the page title?


Answer 41:47 - So it's not true that changing a title will automatically drop a page in ranking; I don't think that would make sense. However, if you change a title and you put new keywords in there, then we obviously need to figure out how we should rank that page based on that title. The title is one of the things that we do look at. We do look at a lot of other things on a page as well, a lot of other signals that are involved with ranking, so just changing a title on its own shouldn't have a big effect overall. But if you're adding something new there that wasn't there before, and you want to rank for that new thing, then obviously that does take a little bit of time. So if you're just changing numbers in the title, then if people were searching for those old numbers or those new numbers, that might be an effect that you would see. In practice, people are not going to search for, like, number three or number five and expect your page to show up. I mean, maybe there are exceptions, but for the most part that's not going to be something that would affect your page's ranking. So if you're changing numbers in a title over time, I think that's perfectly fine, if users are okay with that and it works for everyone.

Question 43:00 - Can Google crawl hyperlinks where we've swapped out the URL with JavaScript? We do this as a workaround for our client due to CMS limitations.

Answer 43:08 - Yes, we can pick that up. The important part, I think, is that the URL needs to be swapped out after the page is loaded. It shouldn't be swapped out when a user does a specific action. So for example, if a user hovers over a link and then you use JavaScript to swap out the URL, that wouldn't be something that we would notice, or if a user clicks on a link and then you use JavaScript to swap out the URL, then that also wouldn't be something that we would notice. But if the page loads and then you execute some JavaScript that cleans up the URLs so that they link to the proper canonical versions, that's perfectly fine, and kind of like we talked about in the beginning, when it comes to rendering sometimes this takes a bit of time. So it's not an immediate thing that we would pick up, and it's likely that we would pick up both of these versions, both the original link that you had there as well as the JavaScript version. So it wouldn't be that the old versions would drop out completely.

Question 44:20 - Does Google understand related topics? For example, if I create a page about pets but I don't mention cats and dogs, will that make it harder for Google to rank me?

Answer 44:35 - No, I don't think that would be problematic. It would of course make it harder to rank this page if someone searches for cats or dogs, but you can create a page about pets that doesn't include all of those different types, and I think that's pretty normal. There's a lot of variation of content out there, and some content focuses more on this side of the topic and some focuses more on a different part of the topic; that's completely normal.

Question 45:09 - How does Google understand quotes pages, given that they're technically duplicate content? Can Google tell that these are quotes pages and that a lot of the content is also on other websites? Is that a bad thing or not? How does Google know?

Answer 45:30 - We do recognize when there are parts of a page that are shared across other pages. So a really common situation is you have a footer on your web page that you share across a lot of pages. We can tell that this part of text is the same as you have across other parts of your website. So what generally happens there is, if someone searches for something that's in the shared piece of content, we'll try to pick the most matching page for that. If someone searches for something that's a combination of that content and something else on a page, we'll try to pull that best matching page. So that's the same as what would happen with these quotes pages, in that if someone searches for a specific quote that you have on this page, then we'll try to pick one of the many quotes pages that we have that has the same quote on it. It might be yours, it might be like a hundred other people's - a lot of people have these quotes - and we'll try to show that one in the search results. It's not that we would see this page as being lower quality, it's just that you're competing with a lot of other sites that have the exact same quotes on them. So if there is something unique that you're providing on these pages, then I would make sure that that is also very visible there, so that it's easy for us to tell that, well, this page is about this quote but also has a lot of information about, I don't know, other Russian quotes or German quotes, and we can tell this user is used to searching in Russian or German, so we'll bring them to your site rather than to a generic site that has just all kinds of quotes. So the more you can bring unique value into those kinds of pages, the more likely we'll be able to show them in the search results. But it's not necessarily something that you have to hide; we recognize these quotes, we understand that they are sometimes shared across lots of websites, and that's completely normal.

Question 47:40 - Suppose I started a blog. The most common methodology is submitting your sitemap in Search Console from day one, but what if I write 50 posts and then add a sitemap file? Is there any difference?

Answer 47:54 - Both of those methods work. So the sitemap file helps us to better understand new and changed pages on a website. It's not a ranking factor, so you won't rank higher just because you have a sitemap file; it does help us to understand which of these pages are available on a website, but for the most part, especially for smaller sites, we can just crawl them normally as well, and there's no big difference with regards to how a site is shown in search whether we crawl it normally or we crawl it with a sitemap file. So a sitemap file is definitely not critical. For larger sites, if you're changing pieces of content that are sometimes a little bit lower in the site, then obviously a sitemap file helps us find those changes a lot faster, but if you're just starting a blog you don't necessarily need to have a sitemap file.
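Our note: for completeness, a minimal sitemap file with a couple of placeholder blog URLs looks like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/blog/first-post</loc>
    <lastmod>2019-04-01</lastmod>
  </url>
  <url>
    <loc>https://example.com/blog/second-post</loc>
    <lastmod>2019-04-05</lastmod>
  </url>
</urlset>
```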

Question 48:50 - We resell hotels in Greece via our website. We've developed a good, friendly section for hotels; however, we also duplicate the content and titles of those hotels as we embed YouTube videos of those hotels. Is that harmful for us?

Answer 49:10 - Well, I think this kind of goes back to the other questions we had on duplicate content, where you really need to focus on making sure that you have something unique on your website. If you're really just providing the same thing as a lot of different websites, then that makes it really hard for us to say, well, this is actually a website that we need to show in the search results. So that's something where I would recommend taking a step back and thinking about what you could do to make sure that your site is really unique and compelling on its own, rather than just the same thing as all of these other websites.

Question 50:00 - If a story has been redirected because it was thin content and many years old, do the negative effects of the article kind of get forwarded on with the redirection?

Answer 50:12 - No, not necessarily. So especially when it comes to content, we look at the content that we find on the ultimate page that we land on. So if you've removed content, if you've cleaned something up - and I guess that happens automatically if you redirect that page - and the old content is no longer there and we only have the new content, then that's perfectly fine. So that wouldn't be anything that would kind of be carried on.

Question 50:45 - Is it normal if Search Console reports mobile-friendly errors on the desktop version of a page when we've associated the mobile-friendly version of the page with the link rel='alternate' tag?

Answer 50:57 - So usually that means we don't have a clear understanding of which of those pages belong together. So for one reason or another, we were indexing these pages individually rather than as a pair where we know this desktop page belongs to this mobile page. So that would kind of point at maybe a mismatch in the way that you have set up the alternate link or the rel canonical link on those pages, so I would look into that. The other thing, of course, is I would also look into what it would take to move to a responsive design, because especially with the mobile-first index, all of these kinds of issues where we have separate mobile URLs just make everything so much more complicated than necessary. Whereas if you can move to a responsive design, or a design that just uses the same URLs for the desktop and mobile versions, then you save yourself so much trouble. So instead of following up on this kind of issue, maybe take the time to say, okay, I should invest in a plan to move to a responsive design so that I don't have to focus on these issues anymore in the future.
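Our note: for reference, the annotations for separate mobile URLs pair the two pages in both directions (the example.com and m.example.com URLs are placeholders):

```html
<!-- On the desktop page, e.g. https://www.example.com/page: -->
<link rel="alternate" media="only screen and (max-width: 640px)"
      href="https://m.example.com/page">

<!-- On the mobile page, e.g. https://m.example.com/page: -->
<link rel="canonical" href="https://www.example.com/page">
```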

Question 1:01:01 - Is it a good idea to add a nofollow to Wikipedia links and Google help articles, like other external links? And how much time does the data highlighter take to take effect in search?

Answer 1:01:16 - Okay, so adding nofollow to Wikipedia links: I don't think that makes much sense unless Wikipedia is paying you to place those links. I would add a nofollow to links that you don't want to have associated with your website, but otherwise, if it's a normal link in the content, I would just link normally; so unless Wikipedia is paying you for those links, I would just link normally, I guess. And for the data highlighter, what happens there is kind of an algorithmic process that can take a bit of time, in that it learns from the indexed pages on your website and, obviously, from the markup that you do in the data highlighter, and then it takes that and applies it to new pages as we recrawl and reindex them across your website. So that's something that takes a period of time; it applies to the content as we recrawl and reindex it on your website. So there's no fixed timeline - sometimes that's fairly quick for a lot of pages and sometimes it can take a couple of months to be visible. So there's no kind of instant-on button for that; it takes time, like any other structured data that you would add to your pages manually.
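Our note: nofollow is set per link with the rel attribute, for example:

```html
<!-- A normal editorial link, e.g. to Wikipedia, can be left as a plain link: -->
<a href="https://en.wikipedia.org/wiki/Search_engine_optimization">SEO on Wikipedia</a>

<!-- A link you don't want associated with your site (e.g. a paid placement): -->
<a href="https://example.com/sponsor" rel="nofollow">Sponsored link</a>
```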

Question 1:02:58 - Is Google aware of, or announcing, a substantial change in the way duplicate content started to be treated around the end of November / start of December last year? Or has there been a substantial change in the way the web rendering service goes about its business, or in the relationship between web rendering and indexing versus crawled indexing, around that period? Because nothing changed in our system and we've seen, as I said, substantial issues starting then, and not just for ourselves - we've also noticed a number of other observations of this class of problem around the same period.

Answer - 1:03:47 - I'm not aware of anything substantial that changed there. The only thing that I think kind of happened around fall or so is that we started adding this feature to Search Console to kind of highlight those problems, and that's, I think, also where I started seeing more of these reports, in that once we highlighted it to users that, hey, we're dropping these URLs - and, like, there's a graph of them - because we think they're all duplicates, of course everyone's like, oh, this is a new problem. But for a large part it's basically always been like that; we just never talked about it in Search Console. I don't know exactly when we started rolling that feature out in Search Console, but probably the second half of last year, something around then.