This week John had some great insight on the Quality Raters' Guidelines, which we are big fans of, when it comes to the trustworthiness of a website. Below are the hand-picked questions and answers we thought were the most helpful for SEOs. Full video and transcription below!

How does Google interpret a page when h tags are used instead of <p>?

4:34

John Mueller Help Hangout February 8 2019

I don't see a big problem with that. I mean, obviously, since you noticed it, it's probably something that would make sense to clean up, but it's not that we would say this has a negative effect. Rather, what you're doing there by saying everything is important is telling us it's all the same importance. So it's like everything is important, therefore nothing is particularly important, and we have trouble understanding the context within the page. That's mostly why I would clean that up: we can figure out which parts are really important and which parts are normal text, and that way we can understand these pages a little bit better. I don't know if you would see a direct ranking change because of fixing that, but it makes it easier for us to figure out what the page is really about. I don't think this is something that we would see as spam or as problematic; it's really just that you're not giving us as much information as you could be giving us by telling us this is really important and this is kind of normal content.

 


Summary: This probably isn’t a big deal. However, John confirmed that Google can use h tags to determine which content on the page is important and what the page is about.
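To illustrate the kind of cleanup John describes, here is a minimal, hypothetical sketch of a page that keeps headings for the important structural labels and puts the main content in paragraph tags, rather than wrapping everything in h4s:

```html
<!-- Hypothetical markup: headings mark the important parts, body copy sits in <p> tags -->
<article>
  <h1>Blue Widget Buying Guide</h1>

  <h2>How to choose a blue widget</h2>
  <p>The main content of the page lives in paragraph tags like this one,
     so the headings above it stand out as the important parts.</p>

  <h2>Care and maintenance</h2>
  <p>More normal body copy here.</p>
</article>
```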


Why does Google sometimes not honor the canonical on syndicated content?

8:56

John Mueller Help Hangout February 8 2019

I think this is always a tricky situation. We do try to figure out which page is the most relevant for some of these queries and to point users directly there, but if these are completely separate websites and they're just posting the same article, then there's also a lot of additional value from the rest of the website. That could be information on that specific page, or it could be additional value that the rest of the website brings, where when someone goes to that one article maybe they go off and look at other things on that website because that was also very nice. So that's something that can always happen, and if you're syndicating content, that's something you need to take into account: it might happen that the content you syndicated to some other website ends up ranking above your content. That's not always completely avoidable. So those are trade-offs that you have to look at there. I think the canonical is a good way to let us know that these two pages belong together, but it's also the case that a canonical isn't really correct in a case like this, because the pages themselves might be completely different. It might be that there's this block of text that's the same across both of these pages, but there's a lot of other content around it that is completely different; that could be user comments, that could be the rest of the website itself. So again, that's a trade-off that you have to look into. It makes sense to bring the information out to a broader audience by syndicating the content, but on the other hand you have to take into account that maybe these other websites will rank above your website when it comes to a search for that specific piece of content.

 


Summary: If you are syndicating content, Google may choose to index or rank the republishing sites' copies rather than your own. If there is a lot of other content (such as comments) on one of those pages, Google may not respect the canonical.
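For reference, the canonical discussed here is the cross-domain rel=canonical placed on the syndication partner's copy, pointing back at the original article. A hypothetical example (URLs invented for illustration):

```html
<!-- In the <head> of the partner site's republished copy (hypothetical URLs) -->
<link rel="canonical" href="https://www.original-site.example/articles/blue-widgets-guide/" />
```

Even with this in place, John's point is that the surrounding content on the partner's page (comments, the rest of the site) can still lead Google to keep that copy indexed and even rank it above yours.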


Does Google always recognize structured data markup?

26:58

John Mueller Help Hangout February 8 2019

So I don't know what you mean with structured data organization, but in general we have algorithms that try to figure out when it makes sense to show structured data as rich results in the search results, and when we feel that maybe it doesn't make sense, or when we feel that maybe we're not a hundred percent sure about this website or about the way that the structured data is implemented on this website, then we'll be a little bit more cautious. So that is something where, if you provide valid structured data markup, it's not a guarantee that it'll always be shown exactly like that in the search results.

 

 


Summary: If structured data is not producing rich results for you (like stars in the search results), it could be that it is not implemented properly. But, it could also be because Google’s algorithms decided not to show it.
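As a concrete, hypothetical example of the kind of markup this applies to, here is minimal review-snippet structured data in JSON-LD; per John's answer, markup like this being valid makes a page eligible for stars in the search results but does not guarantee them:

```html
<!-- Hypothetical product review markup; validity alone doesn't guarantee rich results -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Blue Widget",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "128"
  }
}
</script>
```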


Why do some sites no longer rank on the first page for their own brand name?

53:46

John Mueller Help Hangout February 8 2019

I think that's always a bit tricky; there are usually two aspects involved there. One is more an issue with newer websites, where the website name or the company name is more like a generic query. So if, for example, the website's name is "Best Lawyers in Pittsburgh" or something like that, then on the one hand that might be the company name, but on the other hand, if someone were to type that into search, we would probably assume that they're not looking for that specific company but rather looking for information for that query. Especially with newer companies, that's something that we see every now and then. We see that in the forums, with people saying, oh, I'm not ranking for my domain name, and then their domain name is something like, I don't know, best VPN providers.com. Well, it's a domain name, but it doesn't mean that you will rank for that query. So that's one thing. When it comes to sites that are a little bit more established, that are out there already, usually it's more a sign that we just don't really trust that website as much anymore. That's something where we might recognize that people are actually searching for this company, but we feel maybe the company website itself is not the most relevant result here; maybe we feel that there is auxiliary information about that company which is more important for users to see first, which could result in something like this happening. Usually that's more a matter of things shuffling around on the first one or two pages of the search results. It would be really rare that something like that would result in a website not showing up at all in the first couple of pages of the search results. I think that's what you highlighted there with your question, and that's something where I think, well, even if we didn't trust this website as much anymore, we should at least have it somewhere in the search results, because if we can tell that someone is explicitly looking for that website, it would be a disservice to the user to not show it at all. Maybe for generic queries one could argue it's not the perfect result, but if we can tell that they're really looking for that website, at least we should give the user a chance to see that website as well. So that's why I took that and said, well, maybe we should be catching this a little bit better, and I don't know if our algorithms are correctly understanding how trustworthy your website would be. I don't know the website, so that's really hard for me to judge, but even if we think that it isn't trustworthy at all, maybe we should still show it somewhere in the search results on the first page. The other thing to maybe look at, if you haven't done so already, is the quality rater guidelines that we have; there's a lot of information about trustworthiness in there. Some of that is worth taking with a grain of salt; it's not that we take it one-to-one and use it as a ranking factor, but there are a lot of ideas in there. Especially when you're talking about a topic like a legal website or a medical website, it does make sense to show users why you are providing this information and why they should trust you.

 


Summary: If you've got a new brand that happens to have keywords in the brand name and URL, that doesn't mean you will automatically rank for those keywords. For more established sites, if Google's algorithms don't trust your site, it will not rank well. John points us to the Quality Raters' Guidelines for more info. Our note: there is also good information here on how we think recent algorithm updates are connected with trust.


 

 

If you like stuff like this, you'll love my newsletter!

My team and I report every week on the latest Google algorithm updates, news, and SEO tips.

 

Full Video and Transcript

 

 

Note from John 0:33 - I just wanted to talk about one thing briefly before we move on to the questions. As you've probably seen, we did a blog post about Search Console and some of the changes that are coming up in Search Console. Last year we started moving to a new platform in Search Console, and for us, for the Search Console team, this is something that they've been working on for quite some time, and it's something that we have to get complete more or less towards the end of the year at the latest. So our goal with the blog post, with some of the changes coming up, is to make sure that we inform you all as early as possible, so that when things change or go away in ways that we think might affect you, we can let you know about that as early as possible. Change, I think, is always a bit of a hassle, especially when you have processes that work fairly well and someone goes off and changes the tools or changes the data that's provided in the tools; it's always a bit frustrating. Some of these changes we can't avoid; some of them we think we should have done way back in the beginning when we started with Search Console, if we had known then what we know now, like the canonical change that we announced this week. So I realize sometimes this is a bit frustrating, but we hope we will have a fairly nice path going forward, and we want to really inform you early and let you try things out early so that you're not too surprised when things do change. We do also have a lot of really neat stuff lined up, and by shifting to a new platform and removing some of the old features, the team has a lot more time to actually move forward and create new and fancy stuff. So what you can do there, if you feel strongly about certain things that are either going away, or that are missing, or that you'd like to see in a new tool, is make sure to use the feedback feature in Search Console. And don't just go in there and say, I really want this, but rather give us some information about what you'd like to see from that: what are you trying to achieve by having this new feature, or by having the same thing in the new tool as we had in the old one? Giving us a little bit more information there helps us figure out how we need to prioritize this. Is this something that we maybe missed, something that we should have thought about earlier? Is this something where maybe we can provide a better way to give you that information, or to help you do that thing, than we had in the old tool? So make sure to go into the feedback tool, send us information, and give us feedback on what you think you'd like to see differently. Some of these things we'll be able to do; some of them might take a little bit longer, because we really need to first clean out all of these old things that we've collected over the years and move everything over to the new platform. So for some of that I'd love a little bit of patience, but it's also fine to let us know vocally if there's something that you feel really strongly about, so don't be too shy.

Question 4:34 - We found that on one of our clients' sites, the way they built the website, there is no paragraph text; everything is in h1, h2, h3, and h4 tags. They use the heading 4 tag instead of the p tag for the main content of the website. Does this have any negative impact on their ranking?

Answer 4:40 - I don't see a big problem with that. I mean, obviously, since you noticed it, it's probably something that would make sense to clean up, but it's not that we would say this has a negative effect. Rather, what you're doing there by saying everything is important is telling us it's all the same importance. So it's like everything is important, therefore nothing is particularly important, and we have trouble understanding the context within the page. That's mostly why I would clean that up: we can figure out which parts are really important and which parts are normal text, and that way we can understand these pages a little bit better. I don't know if you would see a direct ranking change because of fixing that, but it makes it easier for us to figure out what the page is really about. I don't think this is something that we would see as spam or as problematic; it's really just that you're not giving us as much information as you could be giving us by telling us this is really important and this is kind of normal content.

Question 7:03 - We keep getting random broken links in Search Console. I wonder what we should be doing there: redirecting them or leaving them as they are?

Answer 7:15 - I don't know what random broken links you're seeing there; that might be something to post in the forum to get some advice on as well. But in general, if you see a link pointing at your website that doesn't work at all, then it's fine to return 404 for a URL that doesn't exist. That's what the 404 status code is for, and that's something that our systems work well with. So if there's a URL that never existed, returning 404 is perfectly fine. On the other hand, if you see that there are links coming to your website that point at the wrong place and you can guess what was meant, maybe they just have a typo or an extra dot at the end or something like that, then those might make sense to redirect, especially when you're seeing people going through those links, because it seems like someone tried to recommend your website but didn't get it perfectly right. So it might make sense to redirect those to the correct page instead. I think for both of these situations you can also look a little bit at the traffic through those URLs. If a lot of people go to those URLs, then that's encouraging, because people want to go to your pages, and it might make sense to figure out what was meant with this link and where you could point it, where you could redirect people to.

 

Question 8:56 - What factors might cause a piece of content that's been syndicated on a partner site to rank well? This is despite the fact that the canonical is set to the original content on my side and has been there for several months. Is it a matter of site, niche, or authority? What could we do there?

Answer 9:19 - I think this is always a tricky situation. We do try to figure out which page is the most relevant for some of these queries and to point users directly there, but if these are completely separate websites and they're just posting the same article, then there's also a lot of additional value from the rest of the website. That could be information on that specific page, or it could be additional value that the rest of the website brings, where when someone goes to that one article maybe they go off and look at other things on that website because that was also very nice. So that's something that can always happen, and if you're syndicating content, that's something you need to take into account: it might happen that the content you syndicated to some other website ends up ranking above your content. That's not always completely avoidable. So those are trade-offs that you have to look at there. I think the canonical is a good way to let us know that these two pages belong together, but it's also the case that a canonical isn't really correct in a case like this, because the pages themselves might be completely different. It might be that there's this block of text that's the same across both of these pages, but there's a lot of other content around it that is completely different; that could be user comments, that could be the rest of the website itself. So again, that's a trade-off that you have to look into. It makes sense to bring the information out to a broader audience by syndicating the content, but on the other hand you have to take into account that maybe these other websites will rank above your website when it comes to a search for that specific piece of content.

 

Question 11:24 - Google is reporting our expired product pages as soft 404s. These URLs redirect to relevant alternate products with a message saying that the product they wanted is unavailable. Is the redirect causing the soft 404, or is it the content of the redirected page?

Answer 11:47 - I suspect what is happening here is that our algorithms are looking at these pages and seeing that there's maybe a banner on the page saying this product is no longer available, and they assume that that applies to the page the user ended up on. So that's sometimes not really avoidable. If you're really replacing one product with another, it might make sense to just redirect.

Question 12:32 - How long does it take for Google to recognize hreflang tags? Is it possible that Google first indexes from Switzerland and shows a CH version of a website under the German TLD?

Answer 13:02 - So we don't actually index content first from Switzerland; our crawlers and our systems are located more in the US than in Switzerland, so I don't think we would prioritize Swiss content over other content. But what happens with hreflang links in general is kind of a multi-step process. First we have to crawl and index those different versions of the page. Then we have to index them with the same URL that you specify within the hreflang markup, and then we need to be able to follow that hreflang markup between those different versions of the page, and to do that we need to have that confirmation back as well. So it's something that does take a little bit longer than just normal crawling and indexing; we have to understand the net between these different pages that are all supposed to be part of this set of hreflang pages. So it's probably normal for us to take, I don't know, maybe two to three times longer than we would to just crawl and index an individual page, so that we can understand the link between the hreflang versions. Again, there is no preference for Switzerland over other countries in Europe. I think that would be nice for me personally, from a kind of egotistic point of view, but it wouldn't make sense globally in general. We do try to treat all websites the same, so just because a website has a CH version doesn't mean it would automatically rank above a German version. The other thing with hreflang is that, for the most part, it doesn't change rankings; it just swaps out the URLs.
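To make the "confirmation back" John mentions more concrete, here is a hypothetical sketch of reciprocal hreflang annotations between a German and a Swiss version of a page; each version lists itself and the alternate, and the return link is what lets Google connect the set (URLs invented for illustration):

```html
<!-- On https://www.example.de/seite/ (hypothetical URLs) -->
<link rel="alternate" hreflang="de-de" href="https://www.example.de/seite/" />
<link rel="alternate" hreflang="de-ch" href="https://www.example.ch/seite/" />

<!-- On https://www.example.ch/seite/ : the return link that confirms the pair -->
<link rel="alternate" hreflang="de-de" href="https://www.example.de/seite/" />
<link rel="alternate" hreflang="de-ch" href="https://www.example.ch/seite/" />
```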

Question 15:12 - I recently changed my site's domain from this one to another one, with 301 redirects in place. A change of address has been initiated. I'm still seeing the old URLs and it's been over three weeks; is that normal? I had some issues with 301s not being active for a week after the migration happened, but they're active now. For some queries both the old and the new site show up in the search results. What could I be doing differently here?

Answer 15:46 - So the 301 redirect is really what you should be watching out for. It's important for us that the 301 redirects are on a per-page basis, so each of the old pages redirects to the matching page on the new website. We have all of that covered in our information in the Help Center for site moves, so I would double-check that and go through step by step, even URL by URL, to see that this is really working the way it should be. The other thing to keep in mind is that we crawl and index URLs individually. So we don't crawl the whole website at once and then switch things over; we do that step by step, and some of these pages are crawled and indexed very quickly, within a couple of hours, while some of them take a lot longer to be re-crawled and re-indexed, and that could be several months. So that could be playing a role here as well, where maybe we just haven't had a chance to crawl, index, and process the redirect for all of these pages, so there are still some that we've only seen on the old website and some that we've already seen on the new one. That could be playing a role here too, especially if you're looking at a period of three or four weeks, and that would be kind of normal. Finally, what also plays a little bit into this is something that SEOs and webmasters find really confusing: even after we've processed that redirect, if someone explicitly looks for the old URL, we'll show them the old URL. That's a little bit confusing. Our systems are trying to be helpful here and say, well, we know this old URL used to exist and we have the new content here, but we'll show it to you because that's possibly what you're looking for. So, for example, if you do a site query for the old URL, even after doing your site move, we can still show you some of the URLs from your old website, even though we've already processed the redirect for those URLs. So when you change your site name, for example, you'll see the old URLs with the new site name mentioned within the site query, and from our point of view that's working as intended; we're trying to help the average user who is looking for a URL. For a webmaster who just did a site move, that is a bit confusing. I don't know if that's something that we will be changing, but in general I think it kind of makes sense.

Question 21:46 - How does Googlebot view website personalization? We have a new product layer on the website that allows personalization based on industry and location, even down to a single company; this allows us to really adjust the content.

Answer 22:12 - I think we looked at this one last time as well, but just to give a quick answer here: the important part is that Googlebot mostly crawls from the US. So if you serve different content to different countries, Googlebot would probably only see the US version of the content. We would not be able to index different versions of the content for different locations. So if there's something that you want to have indexed, make sure it's in the generic part of your website so that Googlebot is sure to be able to pick it up. You're welcome to use personalization to add additional information all across the page, but if you want something to be indexed, it should be in the part of the page that is not tied to personalization.
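A minimal sketch of what that separation might look like, assuming a hypothetical page where the content meant for indexing is served to everyone and the personalized block is only an add-on:

```html
<!-- Hypothetical layout: generic, indexable content is in the HTML every visitor
     (including Googlebot crawling from the US) receives -->
<main>
  <h1>Industrial Pumps Overview</h1>
  <p>Generic product information served to every visitor and to Googlebot.</p>

  <!-- Personalized extras only; nothing here should be relied on for indexing -->
  <aside id="personalized-recommendations"></aside>
</main>
```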

Question 22:59 - I'm wondering how much a very low performance score in web.dev affects the Google ranking of a website?

Answer 23:06 - I don't know. So web.dev is a really cool tool that pulls together the different tests that we have in Lighthouse, gives you scores on those, and guides you through the process of improving those scores: when it comes to speed, things that you need to watch out for, things that you could try, and over time it tracks the progress of your website as you work through the different content in the tool. So that's something that I think is generally good practice to go through and work through, but these are good practices to follow; it doesn't mean that they will automatically result in higher rankings. Similarly, if you have low scores here, it doesn't mean that your website is ranking terribly because it's not following the best practices. So there's kind of an aspect of, on the one hand, if your website is so bad that we can't index it properly at all, which might be the case with a really low SEO score in Lighthouse where we can't access the URLs, or there are no URLs on the page and it's just a JavaScript shell that we can't process the JavaScript for, that could have a severe effect on your SEO. But on the other hand, if it's just a matter of your site being a little bit slow or not being perfectly optimized, I don't know if that would cause a significant effect on your website. So my recommendation here is to look at the advice that's given in tools like web.dev and think about what you can implement, think about the parts that you think are important for your website, on the one hand for search engines, of course, if you're asking this here, and on the other hand also for your users, because ultimately if you're doing something that improves things for your users, then that will have a long-term trickle-down effect on the rest of your website as well.

 

Question 26:58 - Can Google decide whether or not to show information of the structured data organization?

Answer 27:05 - So I don't know what you mean with structured data organization, but in general we have algorithms that try to figure out when it makes sense to show structured data as rich results in the search results, and when we feel that maybe it doesn't make sense, or when we feel that maybe we're not a hundred percent sure about this website or about the way that the structured data is implemented on this website, then we'll be a little bit more cautious. So that is something where, if you provide valid structured data markup, it's not a guarantee that it'll always be shown exactly like that in the search results.

 

Question 28:31 - A question regarding website structure and multilingual, multi-regional configuration: should I worry about the URL parameters configuration when splitting the domains by folders to separate services in Search Console?

Answer 28:52 - So on the one hand, I think it's good that you're looking into these kinds of issues; on the other hand, I am kind of worried that you would have different configurations for a website by subdirectory, because that sounds like maybe you're not doing something that clean with the URL parameters in general across the website. I don't know the website here specifically, so it's really hard to say, but it sounds like you have different parameters that mean different things, or that can be ignored or shouldn't be ignored, depending on the individual subdirectory within your website. On the one hand, that should be possible; we should be able to deal with that. On the other hand, if there are situations where we can completely ignore individual URL parameters, and other cases where these exact same parameters are critical for the content, then that feels like something where our algorithms could get confused and say, well, we always need to keep these parameters or we never need to keep these parameters, and then suddenly parts of your content are missing or parts of your content are indexed multiple times. So using the URL parameter tool definitely helps us in a case like this, but it feels to me like it would probably make more sense to try to clean up these URL parameters in general and find a way to have a consistent structure for your website's URLs, so that algorithms don't have to guess, so that algorithms don't have to figure out, oh, in this particular path this parameter is important and in these other paths we can ignore it. Any time you're adding so much additional complexity to the crawling and indexing of the website, you run into the situation where maybe things will go wrong. So the easier you can keep it, the cleaner you can keep it, the simpler you can keep your URL structure, the more likely we'll be able to crawl and index that site without having to think twice. And as always, there are other search engines out there; they don't have the URL parameter tool or the data that we have there, so they wouldn't be able to see that, and you might be causing problems on those other search engines, or perhaps it also plays a role in how your content is shared on social media. So all of these things come into play here. My general recommendation would be not to spend too much time trying to fine-tune the URL parameter handling tool for all of these different subdirectories, but rather to take that time and invest it into thinking about what you would like to have as a URL structure for the long run, and thinking about the steps you would need to take to get to that cleaner URL structure.

Question 32:17 - I'm hoping to get some insight on an issue that I posted related to content being indexed as "Duplicate, submitted URL not selected as canonical."

Answer 33:04 - So in general, what I think is happening here is that, for whatever reason, our algorithms believe that these pages are equivalent and that we can fold them together, and because of that we pick a canonical from one of these pages. But looking at the pages manually in a browser, they're actually quite different pages, so folding them together would not make sense, and therefore selecting a canonical from this set would also not make sense. One of the things that I've seen in the past that has led to something like this is when we can't render the content properly, when we can't actually access the content properly, when we basically see an empty page, so that we say, oh well, this is the same as the other empty page that we saw, maybe we can fold them together. So offhand, that's the direction I would take there: think about how Google might be concluding that these pages are equivalent. Is it possible that in the mobile-friendly test they don't have actual content? Could it be that I'm showing an interstitial to Googlebot by accident and only that interstitial is being indexed? What might be happening here? I didn't have a chance to look into it in that much detail yet, so it might be that something like this is happening on your side, or it might be that something weird is happening on our side and we need to fix that, but that's the direction I would take in a case like this.

Question 34:41 - I want to navigate my customers a little bit better on my website, and I want to make sure that this wouldn't confuse Google. I'd like to set up my URL structure to have a domain, then category, then product in the path, or maybe I'd set it up differently. What should I do? Which URL structure should I pick?

Answer 35:13 - From our point of view you can use any URL structure. So if you're using a path, subdomain, or subdirectory structure, that's perfectly fine. It's important for us that we don't run off into infinite spaces. So if you use URL rewriting on your server, it shouldn't be the case that you can just take the last item in your URL and keep adding it multiple times and it always shows the same content; it should be a clean URL structure where we can crawl from one URL to the other without getting lost in infinite spaces along the way. You can use URL parameters if you want, but if you do decide to use URL parameters, like I mentioned in one of the previous questions, try to keep them within reasonable bounds, so that we don't again run off into infinite spaces where lots of URLs lead to the same content. But whether you put the product first or the category first, or you use an ID for the category or write out the category as text, that's totally up to you. There doesn't have to be a line between your ecommerce site and your blog; that can be completely different on both of these. So I think it's good to look at this, but on the other hand I wouldn't lose too much sleep over it; rather, define the URL structure that works for you in the long run, in particular one that you don't think you'll need to change in the future. So try not to get too narrowed down, and pick something that works for you and works for your website.

Question 37:43 - We're developing an application with Angular Universal. For some sections we want to change the appearance of the URLs in the browser but keep them the same on the server side. So for the server it would be "luxury goods leather bags" but the user would see just "leather bags". Is there any problem with this in Angular Universal using dynamic rendering?

Answer 38:08 - So just from a practical point of view, Googlebot doesn't care what you do on the server; you can track that however you want on your server. The important part for us is that we have separate URLs for separate pages, that we have links that Googlebot can follow, which are a elements with an href pointing to a URL, and that we can access these URLs without any history states attached to them. So if we take one URL from your website, we should be able to copy and paste it into an incognito browser and it should load that content. If it loads that content, and if you have proper links between those pages, then from our point of view how you handle that on your server is totally up to you. Using Angular Universal with dynamic rendering, or something of your own that you set up, is all totally up to you; that's not something that we would care about. It's not something we would even see, because we see the HTML that you serve us and the URLs that you serve us.
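As a small sketch of the link format John describes (URLs and routing code invented for illustration), a crawlable link is a normal a element with an href pointing to a URL that loads on its own, rather than a click handler that only works inside the app:

```html
<!-- Crawlable: a real anchor element with an href Googlebot can follow -->
<a href="/leather-bags/">Leather bags</a>

<!-- Not reliably crawlable: there is no URL for Googlebot to fetch -->
<span onclick="router.navigate('leather-bags')">Leather bags</span>
```

`router.navigate` here is just a hypothetical placeholder for whatever client-side routing the application uses.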

Question 39:18 - My website fetches individual web pages through an API, but they are not interlinked and no links are displayed to click through. It has a search box, and every individual page shows up in the search results. Google is successfully crawling all links as valid via the sitemap. Does Google see this as a valid practice? Will links in the millions harm rankings or increase rankings?

Answer 39:48 - So there are lots of aspects in this question where I'd say this sounds kind of iffy, and some things that sound kind of okay. If Google is already indexing these pages, then something is working out right. In general, I'd be careful to avoid setting up a situation where normal website navigation doesn't work. We should be able to crawl from one URL to any other URL on your website just by following the links on the pages. If that's not possible, then we lose a lot of context. So if we're only seeing these URLs through your sitemap file, then we don't really know how these URLs are related to each other, and it makes it really hard for us to understand how relevant this piece of content is in the context of your website, and in the context of the whole web. So that's one thing to watch out for. The other thing to watch out for, I think, if you're talking about millions of pages that you're generating through an API with a search box and submitting that many via sitemap files, is that I'd be kind of cautious with regards to the quality of the content that you're providing. In particular, if you have product feeds, if you're using RSS feeds to generate these pages, if you're doing anything to automatically pull content from other websites or other sources and just republish that on your site, then that's something where I could imagine our quality algorithms maybe not being so happy. Similarly, if this is all really completely republished from other websites, I could imagine the webspam team taking a look at that as well and saying, well, why should we even index any of this content, because we already have all of it indexed from the original sources. What is the value that your website is providing that the rest of the web is not providing? So that's something to watch out for. I don't want to suggest that your website is spammy, and I haven't seen your website, but it is something that we do see a lot, and it's something where as a developer you go, oh, I have all of these sources and I can create code, therefore I can combine all of these sources and create HTML pages and now have a really large website without doing a lot of work. That's really tempting, and lots of people do that; lots of people also buy frameworks that do this for you, but it doesn't mean that you're creating a good website. It doesn't mean that you're creating something that Google will look at and say, oh, this is just what we've been waiting for, we will index it and rank it number one for all of these queries. So it might look like very little work in the beginning because you can just combine all of these things, but in the end you spend all this time working on your website when actually you're not providing anything of value, and then you end up starting over again and trying to create something new. So it looks tempting to save a lot of work in the beginning, but in the long run you basically lose that time. It might make more sense to figure out how you can provide significant value of your own on your website, in a way that isn't available from other sites.
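One way to address the "only reachable via the sitemap" issue John raises would be to add ordinary internal links so Googlebot can reach these pages by crawling; a hypothetical sketch of a hub page doing that (URLs invented for illustration):

```html
<!-- Hypothetical hub page linking pages that were previously only in the sitemap -->
<nav>
  <h2>Browse guides</h2>
  <ul>
    <li><a href="/guides/widget-repair/">Widget repair</a></li>
    <li><a href="/guides/widget-installation/">Widget installation</a></li>
    <li><a href="/guides/widget-recycling/">Widget recycling</a></li>
  </ul>
</nav>
```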

Question 43:34 - We're facing an issue where lots of resources couldn't be loaded, leading to the page not getting rendered in the snapshot for Googlebot. While debugging these issues we couldn't find a solution, and Google is marking them as "other error." What could that be?

Answer 43:51 - So this is also a fairly common question. What is essentially happening here is we're making a trade-off between a testing tool and the actual indexing. Within the testing tool we try to get information as quickly as possible directly from your server, but at the same time we also want to give you an answer reasonably quickly so that you can see what is happening. What tends to happen here is, if you have a lot of resources on your pages that need to be loaded in order for your page to load, then it could happen that our systems essentially time out; we try to fetch all of these embedded resources, but we don't have enough time, because we want to provide an answer to you as quickly as possible. So you end up seeing these embedded resources not being pulled, and you see an error in the live rendering of the page like that. When it comes to indexing, our systems are quite a bit more complex, though; we cache a lot of these resources. So if we try to index an HTML page, we'll know we've seen all of these CSS files before; we can just pull them out of our cache, we don't have to fetch them again, we can render those pages normally, and that just works. There is one thing you can do, or maybe two, to help improve that for the testing tool and for users in general. On the one hand, you can reduce the number of embedded resources that are required on your pages. So instead of having a hundred CSS files, you throw them into a tool and create one CSS file out of them; that's one thing you can do that makes sense for both users and search engines. You can do that for JavaScript as well: you can minify the JavaScript, you can combine things and make packages rather than individual files. I think that's a good approach. The other thing is, if you're seeing this happening for your pages and you don't have a lot of embedded content, then that's kind of a hint that your server is a bit slow and that we can't fetch enough content from your server to actually make this work. So that might be a chance to look at your server and your network connectivity, and to think about what you can do to make that a little bit faster, so that these tools don't time out and so that it's also faster for users. So in both of these cases the net effect is that users will mostly see the speed improvement, but the side effect will also be that you'll be able to use these tools a little bit better, because they tend not to time out as much.

So what I would do there is try to use some other tools to figure out: is this really a problem on your side, that things are somehow a little bit slow, or is it just on Google's side, that we tend not to have as much time to fetch all of these individual resources? What you could do is use the Chrome developer tools, the Network tab that you have there, to figure out how many of these resources are being loaded and how long that takes. You can also use webpagetest.org; it creates a waterfall diagram for your content, listing the time it takes for those test URLs and the size of the resources that were returned. By using those two, you can figure out: is it the case that it just takes 20 seconds to load my page with all of the embedded content and all of the high-resolution images, or is it the case that these testing tools say my page loads in three or four seconds with all of the embedded content, and therefore it's probably more an issue on Google's side and I don't have to worry about it?
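To picture the "one CSS file instead of a hundred" suggestion above, here is a hypothetical before-and-after sketch of combining embedded resources so that testing tools (and users) have fewer requests to wait on (file names invented for illustration):

```html
<!-- Before: many separate resources that a testing tool may not have time to fetch -->
<link rel="stylesheet" href="/css/base.css" />
<link rel="stylesheet" href="/css/layout.css" />
<link rel="stylesheet" href="/css/product.css" />
<script src="/js/menu.js"></script>
<script src="/js/gallery.js"></script>

<!-- After: combined and minified bundles, as John suggests -->
<link rel="stylesheet" href="/css/site.min.css" />
<script src="/js/site.min.js"></script>
```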

Question 53:46 - I've noticed, as a result of the last several updates that have been coined the "medic update," some websites that no longer show on the first page of search results for their own company, for their own brand, and I was wondering, in general, why would that be?

Answer 54:35 - I think that's always a bit tricky; there are usually two aspects involved there. One is more an issue with newer websites, where the website name or the company name is more like a generic query. So if, for example, the website's name is "Best Lawyers in Pittsburgh" or something like that, then on the one hand that might be the company name, but on the other hand, if someone were to type that into search, we would probably assume that they're not looking for that specific company but rather looking for information for that query. Especially with newer companies, that's something that we see every now and then. We see that in the forums, with people saying, oh, I'm not ranking for my domain name, and then their domain name is something like, I don't know, best VPN providers.com. Well, it's a domain name, but it doesn't mean that you will rank for that query. So that's one thing. When it comes to sites that are a little bit more established, that are out there already, usually it's more a sign that we just don't really trust that website as much anymore. That's something where we might recognize that people are actually searching for this company, but we feel maybe the company website itself is not the most relevant result here; maybe we feel that there is auxiliary information about that company which is more important for users to see first, which could result in something like this happening. Usually that's more a matter of things shuffling around on the first one or two pages of the search results. It would be really rare that something like that would result in a website not showing up at all in the first couple of pages of the search results. I think that's what you highlighted there with your question, and that's something where I think, well, even if we didn't trust this website as much anymore, we should at least have it somewhere in the search results, because if we can tell that someone is explicitly looking for that website, it would be a disservice to the user to not show it at all. Maybe for generic queries one could argue it's not the perfect result, but if we can tell that they're really looking for that website, at least we should give the user a chance to see that website as well. So that's why I took that and said, well, maybe we should be catching this a little bit better, and I don't know if our algorithms are correctly understanding how trustworthy your website would be. I don't know the website, so that's really hard for me to judge, but even if we think that it isn't trustworthy at all, maybe we should still show it somewhere in the search results on the first page. The other thing to maybe look at, if you haven't done so already, is the quality rater guidelines that we have; there's a lot of information about trustworthiness in there. Some of that is worth taking with a grain of salt; it's not that we take it one-to-one and use it as a ranking factor, but there are a lot of ideas in there. Especially when you're talking about a topic like a legal website or a medical website, it does make sense to show users why you are providing this information and why they should trust you.