March 19, 2019 – Google Help Hangout Notes

In this Webmaster Help Hangout, John Mueller answered some great questions. Here are the ones we thought would be the most helpful.

I am currently rebranding and migrating a segment of our site to a new domain. It is currently ranking well. Will Google treat this content the same as before, or will it be treated differently?

0:34

John Mueller March 19 2019 Help Hangout

You can definitely do it on a new domain as well. In general, what will happen if you split your website up into separate parts is that we will have to look at those parts individually. So we would have to re-evaluate those pages on a kind of per-page basis. So you can definitely rank like that, but it is not the case that you could say that you will rank exactly the same as before, because it will be a different situation – you will have a different website setup.

Summary: The content is in a different context and will be re-evaluated on a per-page basis. You may rank well again, but it’s possible that you may not rank exactly as you did in the past.

Does the March 12th core algorithm update have anything to do with the August 1st update?

20:20


I don’t know how this would relate to the updates in August. In general, when we make algorithm updates we do try to take one state and work towards a new state, and sometimes we improve things where we recognize maybe the algorithm went a little bit too far, and sometimes we improve things where we recognize the algorithm didn’t go far enough. So these kinds of algorithmic changes, I think, are completely normal with algorithm updates in general.

Summary: These kinds of algorithmic changes are completely normal; sometimes Google tweaks things to improve the search results. John doesn’t know of any relationship between the two updates.

We started ranking for unrelated terms and pharmaceutical drugs. We believe we’ve been hacked. Is there any way I can check to see if this is true?

27:22


So my suspicion is that maybe you were hacked after all. I don’t know your website, but usually this is not something that just randomly happens – a website doesn’t start ranking for pharmaceutical terms for no reason – so maybe it was hacked in a way that is not completely obvious to you. What you could do is go into Search Console and, in the search analytics section, check out the pages that were ranking for these terms, then use the URL inspection tool’s live fetch option to see what those pages look like when Googlebot fetches them. Sometimes in the visual part, and sometimes in the HTML, you can see that someone added a bunch of pharmaceutical links to those pages, and that’s a sign that maybe something on those pages was hacked. Sometimes it’s also something that just lingers a little bit when a website was hacked in the past and you have cleaned up the hack. Those are the directions I would look there. I would not assume that a normal website just suddenly ranks for pharmaceutical terms without any reason at all; usually there is something behind it.

Summary: In Search Console’s search analytics section, you can see which pages are ranking. If you use the URL inspection tool, you can check the HTML and see whether links have been added – that can be a sign those pages were hacked. These rankings can also linger after a past hack has been cleaned up.

When disavowing backlinks, aside from disavowing the spammy backlink itself, do we need to disavow all other URLs it is referred from?

30:45


No. If you disavow a problematic link, then that link won’t be taken into account. If you think the whole website – all of the links to your website from someone else’s website – is problematic, then you disavow the whole domain. That’s the easiest way to cover your bases, but you don’t need to follow the chain back to all the other links pointing to the page with the bad link. It’s really often just that one link that you need to disavow. And oftentimes you don’t really need to disavow things at all: unless you have a manual action, or you look at your links and say, well, Google is going to give me a manual action next week because it looks so bad, then usually you don’t need to use the disavow tool at all.

Summary: You only need to worry about disavowing unnatural links from sites that link directly to you.
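If you do need to disavow, the disavow file itself is just a plain-text list of URLs and/or `domain:` entries uploaded through Google’s Disavow Links tool. A minimal sketch (the domains below are placeholders):

```text
# Disavow one specific spammy link
http://spam-directory.example.net/links/page7.html

# Disavow everything from a domain – the "cover your bases" option John describes
domain:spam-directory.example.net
```

Lines starting with # are comments, with one URL or `domain:` entry per line.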

We have a website that dynamically creates pages. If a user is searching for Programmers in Oxford we have a database that creates a page that pulls information about Programmers and pulls information about Oxford and puts them together. Do you have any recommendations on a site built like this?

38:56


I would be kind of worried about the setup that you have there. It sounds a lot like you’re just taking a lot of chunks from the database and automatically generating combinations of those pages, and it feels to me that a lot of these variations could be very thin and very low quality – maybe you don’t have a lot of content for those variations, and you quickly run into a situation where you create thousands of pages from all these combinations while the actual amount of content is very minimal. So what could happen on Google’s side is: we go off and crawl and index all of these pages and think, oh, there’s so much stuff here, we can pick all of this up, but in a second step, when we look at the details, we’re like, actually, there’s not a lot of really useful information here – especially if you’re also taking the jobs from existing feeds from other websites. Then what we see is essentially just a mix of some city information, some role information, and job listings which we’ve already captured somewhere else, and we look at that overall and ask: what’s the additional value of having your page indexed in addition to the things we have already indexed about all of these combinations? That’s the tricky part, and that’s where you need to jump in and say: actually, the way I have things combined is completely different from everything else. There’s a lot of value that I add through these pages, so that when people go to them they clearly see this is not just a combination of existing feeds – there’s actually something really fantastic here that nobody else has captured so far.

Summary: Be careful if your site is built this way. When Google indexes these pages, it only picks up little bits of information – in this example, a little information about a city and a job description – and then sees the same bits repeated across your entire site. This can be seen as thin content, and Google may choose not to index it because it adds no additional value for users.

 


 

 

Q 0:34 Question about rebranding and moving a site. Yes, so do you want me to ask the same question again? So the question is – we want to migrate a segment, or portion, of our website, say 100-102 pages, to a new domain for rebranding purposes. Right now we are ranking well for these segments, but will we be able to rank well for these segments on a new domain, or is it treated as an independent or separate business/website by Google?

 

A 1:09 You can definitely do it on a new domain as well. In general, what will happen if you split your website up into separate parts is that we will have to look at those parts individually. So we would have to re-evaluate those pages on a kind of per-page basis. So you can definitely rank like that, but it is not the case that you could say that you will rank exactly the same as before, because it will be a different situation – you will have a different website setup.

 

Q 1:54 Previously when we used GSC, there was the option of “landing page”. When we pulled a URL on that landing page, we could see which keywords it was ranking for. Is there any alternative to this now?

 

A 2:14 So kind of seeing which keywords rank for which queries? That should be in the search analytics section. That should be no problem.

 

Q 2:35 In the exact view for that previous [way] we could see which landing page in search console.

 

A 2:45 So seeing which landing page would be shown?

 

Q 2:52 Erm, can you let me find out that?

 

A 2:52 Yeah, because essentially in the new GSC the search analytics feature is very similar to the search analytics feature in the old search console. So you should be able to see the landing pages, the clicks and the queries and impressions, position….

 

Q 3:18 Yes, but now I can only see the information for websites that I have submitted to Google Search Console, whereas before I could see, for any website, the keywords for which that website was ranking.

 

A 3:23 Nope, nope – that wasn’t in search console so far as I am aware. So maybe you were looking at?…

 

Q 3:32 No, because I still have found one account that I have for which that still works. I will show you that

 

A 3:37 Ok, because you really need to have a site verified so that we can show you the data in search console. So it would not be the case that you would be able to see the clicks and impressions data for a third party website that you don’t have verified.

 

Q 3:54 But previously I could see that there

 

A 5:56 Not in Search Console… Yeah, that shouldn’t be in search console and that shouldn’t be a thing that will be coming to search console either.

 

Q 4:15 Ok, another question – are there any guidelines about pagination and citation?

 

A 4:17 Pagination, we have a lot of guidelines on pagination! There is a help center article on pagination. There is a…

 

Q  4:30 Can you put the link here?

 

A 4:30 Sure, it should just be pagination. Yes, it is called “Indicate paginated content to Google”. Oh great – Barry has posted it. Cool.

 

Q 4:57 Hi John, ok, so on the new Search Console I saw the links tab. I have had a problem in the last few months where there was a clone site… It created many backlinks because this clone site used all my site’s relative links – all of the news – creating links. So actually, even after I disavowed that domain, it still says that I have 986,000 links; the second site is 69,000. So as you can see, it is ten times the normal backlinks for the site compared to the second one. So I disavowed, and it probably should have worked, but I also see in the links tab another table that shows the anchor “best beach bodies”. So this is the question: this table changed because of those backlinks – can this affect my search ranking? I can see myself that if I search for that anchor I am in the first position.

 

A 7:58 So I don’t think that would be a problem. One of the confusing parts there could be the count of the links that we show; that is probably just a sign that this is a site-wide link from the website. So it feels like a lot of links, but it is actually just from one website, so it is actually not problematic. Also, if you disavow those links, we will not take them into account. That is pretty easy to do – just disavow the domain. What would not happen is that the links disappear [from GSC] because of the disavow. In the links tool we show all of the links to your website, even the ones that we ignore. So if you have disavowed those links, they will not disappear from GSC (and the same goes for the anchors).

 

Notes by CD (10-20 min)

Q 09:30 – [question summarized] for a site that streams videos and hosts on-demand videos, is there anything other than schema which would help it appear better in the SERPs? Also, what kinds of optimizations can be done to the videos to earn a featured snippet?

 

A 11:58 – “I don’t know for sure because of the streaming side, I think that’s something I haven’t looked into very much. But in general, I think the first thing that you’d want to double check is the indexing side, like if we can actually index the pages and the content – the video content on the one hand, the streams on the other hand – it sounds like that’s not really a problem, that we can show these videos in search… is that correct? [Q asker responds “yes”] So, I think kind of the basic foundation is already set, that’s good, so we can pick up the content, we can index the videos, that’s kind of the foundation there. For structured data in general, I think you’d want to differentiate between streams and recorded content. That’s one thing that would be really important to make clear with regards to the structured data. So when you’re submitting videos – recorded content that’s available, like, anytime – that’s something where you can use the normal video markup. Or you can also use video sitemaps to tell us more about those videos. For example, if the video content is only available in certain countries, you can tell us about that in the video sitemap. For streams, that’s where I’m not really sure what the right approach is there. I believe there is also the indexing API that you might be able to use for video, for livestream content – I’m not 100% sure, you might want to double check that. But in any case, you really want to differentiate the recorded content from the livestream content, and treat them differently and make sure that they’re marked up separately in their own ways. With regards to featured snippets, I don’t think we show live streams in the featured snippets. I think, as far as I know, the kind of video previews that we show – the video one-boxes – are more for the recorded content. I don’t think we would show streams there, but if you have it recorded then that’s something that would be eligible.
[Q asker butts back in to reiterate the second part of his question, is there anything else besides schema markup to get their videos pulled for a featured snippet]. You don’t need to do anything special past that. If we can pick up the video content and show it for the video search results, like, if you go into the video mode and search, if we can show the video thumbnail that means we kind of have all the details that we need for the video content. The important parts are that we can crawl and index the thumbnail, so that you have a thumbnail image that’s not blocked by robots.txt, and that the video file that you link to is also indexable.”
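The video sitemap John mentions for recorded content might look roughly like this (URLs and country codes are placeholders; the tag names follow Google’s video sitemap extension):

```xml
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:video="http://www.google.com/schemas/sitemap-video/1.1">
  <url>
    <loc>https://example.com/videos/product-walkthrough</loc>
    <video:video>
      <!-- Thumbnail must be crawlable (not blocked by robots.txt), per the answer above -->
      <video:thumbnail_loc>https://example.com/thumbs/walkthrough.jpg</video:thumbnail_loc>
      <video:title>Product walkthrough (recorded)</video:title>
      <video:description>Recorded demo of the product.</video:description>
      <video:content_loc>https://example.com/media/walkthrough.mp4</video:content_loc>
      <!-- Country availability, as John describes -->
      <video:restriction relationship="allow">CH DE AT</video:restriction>
    </video:video>
  </url>
</urlset>
```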

 

Q 15:41 (this is still the same guy as above, I’m paraphrasing again): there is a table that shows the results of the matches, which is part of the featured snippets. Where do they come from?

 

A 16:23: “I’m not aware of any shortcuts there, it’s really just normal search ranking. For a lot of this type of content, if we can show it in a tabular form, then obviously having that kind of content in a table on the page in HTML helps a lot.”
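In other words, if the match results already sit in a plain HTML table, Google can parse the rows directly. A minimal sketch with made-up data:

```html
<table>
  <caption>Match results</caption>
  <tr><th>Home</th><th>Away</th><th>Score</th></tr>
  <tr><td>Example FC</td><td>Sample United</td><td>2–1</td></tr>
  <tr><td>Demo City</td><td>Test Rovers</td><td>0–0</td></tr>
</table>
```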

 

Q 17:14: [Barry has a technical SEO question] So you have a URL, and in the URL they have a parameter that tells where you clicked from – which I don’t recommend, but this website has it – like they have a parameter showing you came from “Section A” or “Section B” in the URL. They also have, like, page=1 of a rel=next/rel=prev set. Should they keep the section parameter in the URL, and also the rel=next/rel=prev? I think it makes more sense to probably rel=canonical… you can use canonical and rel=next/prev at the same time, right? [John says “yes”] … so, rel=canonical to the main URL without the section parameter, and then rel=next/rel=prev should just exclude the section parameter? [John says “yes”].

 

A 18:08: “I think that anytime you can simplify it so that you have fewer URLs that lead to the same content, then that’s an approach I’d take. [Barry adds his number one approach would have been to remove the section parameters, but they said they can’t do that for tracking purposes] Yeah, sometimes you have these things for tracking, so like, you have to deal with what you have. Sometimes, what you can do is move it to a fragment instead of a normal query parameter, which usually means that we will drop that for indexing [Barry says: “I suggested that also, but they said no, so the end”].
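A sketch of the setup Barry and John agree on, assuming a hypothetical `section` tracking parameter: the canonical and the rel=prev/next URLs both drop the tracking parameter while keeping the page number.

```html
<!-- Served on /results?section=a&page=2 ("section" is the tracking parameter) -->
<link rel="canonical" href="https://example.com/results?page=2">
<link rel="prev" href="https://example.com/results?page=1">
<link rel="next" href="https://example.com/results?page=3">
```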

 

Q 19:09 Since Google confirmed that none of the core updates lined up with any of the neural matching updates, is it safe to say that sites should really look to improve quality and relevance over the long term when it comes to these broad core updates?

 

A 19:24: “I think that’s safe to say, regardless of any updates, right? Like, it’s always worth improving things. I have no idea what you mean by neural matching updates; it sounds like something ‘machine-learning-y’ [waves hands around] that someone is trying to pull out and separate. We use machine learning in lots of parts of our infrastructure, so pulling that out as something specific is sometimes more artificial than really useful.”

 

Q 20:04: With the March 12th core algorithm update, there were many sites that saw positive movement after dropping heavily during the previous update. Was there a softening of whatever rolled out in August?

 

A 20:20: “I don’t know how this would relate to the updates in August. So I mean, in general, when we make algorithm updates we do try to take one state and work towards a new state, and sometimes we improve things where we recognize maybe the algorithm went a little bit too far, and sometimes we improve things where we recognize the algorithm didn’t go far enough. So these kinds of algorithmic changes, I think, are completely normal with algorithm updates in general.”

 

Q 20:51: ETag versus If-Modified-Since – which of these HTTP headers would you recommend using for crawl budget optimization, and why?

 

A 21:03: “So I think these are two separate things. Like, If-Modified-Since is a header that we send with a request and the ETag is something that the server sends with the response, so I mean obviously there’s a match there, in that If-Modified-Since is based on a date and ETag is more of an ID-type system. In general both of those work. We do try to use those when it comes to crawling, but we don’t use them all the time; we realize a lot of sites either don’t implement these or have implemented them incorrectly, so it’s something that doesn’t completely change the way that we crawl the website. It helps us a little bit, but for the most part sites generally don’t have to worry about that level when it comes to crawling. When it comes to users, those kinds of optimizations can make a big difference – especially users who come back regularly will be able to reuse the CSS or JavaScript and things like that.”
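To make the pairing concrete: `Last-Modified` and `ETag` are response headers, and a client revalidates by echoing them back as `If-Modified-Since` and `If-None-Match`. A sketch of the exchange (header values are placeholders):

```text
# First fetch – the server labels the response
HTTP/1.1 200 OK
Last-Modified: Tue, 19 Mar 2019 10:00:00 GMT
ETag: "abc123"

# Later revalidation – the client echoes those labels back
GET /page HTTP/1.1
If-Modified-Since: Tue, 19 Mar 2019 10:00:00 GMT
If-None-Match: "abc123"

# Unchanged content can be answered without re-sending the body
HTTP/1.1 304 Not Modified
```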

 

Q 22:20: In the recent Webmaster Central blog post about dates, Google said to be consistent in usage. I have a client who wants to show a Hijri date – I’m probably saying that wrong – as the publishing date on article pages. Is it okay to show that on the visible page and use Gregorian dates in structured data?

 

A 22:35: “Yes, you can do that. The different formats are less of a problem, and in structured data we definitely need to have the Gregorian dates, so you need to specify it with the ISO – what is it – 8601 format; I’m not completely sure which code we specified there. You need to use that format to specify the date and the time, ideally also with time zones, so that we can sort that properly. But if within the visual part of the page you want to use your local date format, that’s generally fine. That’s something that we should be able to understand and pick up as well.”
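The practical upshot: whatever calendar the visible page shows, the structured data date should be ISO 8601 (Gregorian), ideally with a timezone offset. A hedged JSON-LD sketch (values are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example article",
  "datePublished": "2019-03-19T08:00:00+01:00"
}
</script>
<!-- The visible byline can still render the date in any local format or calendar -->
```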

 

Q 23:15: We’re a newish business that helps businesses work with recruitment agencies across the whole of the UK. When we started, we dynamically created pages for combinations of locations and job roles, and due to an error we created 13 pages for each combination. We’re trying to remove all of these with a noindex meta tag but they’re still there. What can we do to get this fixed?

 

A 23:43: “So in general, the noindex meta tag is the right approach here. It means that when we recrawl those pages we will drop them from our index. That’s perfect for something like this. What you can do, if you feel that you need to get these removed faster, is either use the URL removal tools, if you have a clean subdirectory that you can remove, or use a sitemap file to tell us that these URLs changed recently, with a new last-changed date, and then we’ll go off and try to crawl these a little bit faster. In general though, especially if it’s a larger website, sometimes it just takes weeks or months for us to recrawl everything – especially if we went off and crawled and indexed some obscure facet of your website, those are probably URLs that we don’t schedule for crawling frequently, so that’s something that can take maybe half a year or so to completely drop out. In practice, that shouldn’t be a big problem though, because we can recognize which URLs you want to focus on and we’ll focus mostly on those as well. These old ones will be indexed for a while until we can process them and drop them, but they shouldn’t be affecting the indexing of the rest of your website.”
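As a sketch of the two pieces John describes (the URL is a placeholder): each accidental page carries a noindex meta tag, and a sitemap entry with a fresh lastmod nudges Google to recrawl it sooner.

```html
<!-- On each accidental duplicate page -->
<meta name="robots" content="noindex">
```

```xml
<!-- Sitemap entry hinting that the page changed recently -->
<url>
  <loc>https://example.com/jobs/programmers-oxford-3</loc>
  <lastmod>2019-03-19</lastmod>
</url>
```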

 

Q 25:04: I have a few clients that get their local knowledge panel triggered by non branded terms. Do you know what signals may influence this?

 

A 25:13: “I have no idea. So that kind of falls into the general local business, Google My Business results. So I don’t really know what would be triggering that. Probably something with the listing with the website but I mean I’m just guessing probably like how you would be guessing. I’d recommend going to the Google My Business help forum and checking out what other experts there are saying.”

 

Q 25:47: Is image search different from web search? I’ve blocked an image using a Googlebot-Image disallow; does that mean the default Googlebot will also not crawl the image, and it will not show in web search?

 

A 25:59: “So, yes, it is different. I kind of like how you mentioned we have different directives for the different spots. If you block an image for Googlebot-Image then we won’t show it in image search, and we also don’t use it for web search within the universal results, because the universal results are filtered on being good for Google image search results. So if that image is blocked, we essentially won’t use that image. It wouldn’t be a problem for your normal web results; it’s not the case that when we index a webpage for web search we need to have all these images available, or that we do something fancy around the images. It’s really just a matter of: we can’t crawl these images, so we can’t show them in image search. You don’t have any positive or negative side effect from images not being available, for normal web search – the normal text-type searches.”
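The directive split John refers to can be sketched in robots.txt (the path is a placeholder): a group for Googlebot-Image keeps images out of image search without affecting regular Googlebot crawling.

```text
# Applies only to Google's image crawler
User-agent: Googlebot-Image
Disallow: /images/

# Regular Googlebot still crawls the pages normally
User-agent: Googlebot
Allow: /
```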

 

Q 27:05: We started ranking for unrelated ingredients we don’t sell nor have content for, like pharmaceutical drugs. We haven’t been hacked, nor do we have incoming anchor text with those terms. Is this an issue, and if so, what can I check?

 

A 27:22: “So my suspicion is that maybe you were hacked after all. I don’t know your website, but usually this is not something that just randomly happens – a website doesn’t start ranking for pharmaceutical terms for no reason – so maybe it was hacked in a way that is not completely obvious to you. What you could do is go into Search Console and, in the search analytics section, check out the pages that were ranking for these terms, then use the URL inspection tool’s live fetch option to see what those pages look like when Googlebot fetches them. Sometimes in the visual part, and sometimes in the HTML, you can see that someone added a bunch of pharmaceutical links to those pages, and that’s a sign that maybe something on those pages was hacked. Sometimes it’s also something that just lingers a little bit when a website was hacked in the past and you have cleaned up the hack. Those are the directions I would look there. I would not assume that a normal website just suddenly ranks for pharmaceutical terms without any reason at all; usually there is something behind it.”

 

Summary: In Search Console’s search analytics section, you can see which pages are ranking. If you use the URL inspection tool, you can check the HTML and see whether links have been added – that can be a sign those pages were hacked. These rankings can also linger after a past hack has been cleaned up.

 

Q 28:41: Speaking about the previous keyword planner – you see, when I click on — [John:] that’s the AdWords tool. [Question]: I will ask anyway – in the new infrastructure this option is not available. Is there any alternative to this?

 

A 29:16: “I don’t know. So that’s the AdWords keyword planner tool, so I don’t know what the plans are from the AdWords side. That’s not something that is related to Search Console; that’s really purely one from the ads team.”

 

Q 30:45 When disavowing backlinks, aside from disavowing the spammy backlink itself, do we need to disavow all other URLs it is referred from?

 

A 30:55 No. If you disavow a problematic link, then that link won’t be taken into account. If you think the whole website – all of the links to your website from someone else’s website – is problematic, then you disavow the whole domain. That’s the easiest way to cover your bases, but you don’t need to follow the chain back to all the other links pointing to the page with the bad link. It’s really often just that one link that you need to disavow. And oftentimes you don’t really need to disavow things at all: unless you have a manual action, or you look at your links and say, well, Google is going to give me a manual action next week because it looks so bad, then usually you don’t need to use the disavow tool at all.

 

Our note: Google is really good at parsing through and finding spammy links and ignoring them. The most important thing to look out for is if you have any unnatural links that were created for SEO purposes.

 

Q 31:45 How does Google treat backlinks from website analysis websites or user profiles or user generated content, automatically generated content sites?

 

A 32:00 For the most part, we ignore those, because they link to everything and it’s easy to recognize so that’s something we essentially ignore.

 

Q 32:10 Our business is about software development and DevOps services; we have a separate page for each service we provide, plus blog pages. The articles in the blog cover diverse topics, from how to hire a good team of programmers to how to beat procrastination and the role of women in business. I’m worried that such a broad spectrum of topics could dilute the overall topic of our website, and as a result it could be difficult to rank for the transactional keywords that sell our service. Is that a problem?

 

A 32:50 No, I don’t think that would be a problem. I think that vaguely these are all related. The thing that I would focus on more is: if people go to those pages and you get absolutely no value from those users going to those pages, then maybe that’s not a good match from a business point of view.

 

Q 33:45 Question about rich snippets: our company and our offers are based in Switzerland, and in our case we often see websites from Germany with prices in euros. From my point of view that’s not really useful; can we use structured data to show up instead of those other websites?

 

A 34:15 As someone in Switzerland, I kind of agree that sometimes seeing a lot of sites from Germany is not very useful. But essentially, if we don’t have good local content, and if we can’t tell that the user is looking for something from a local point of view, then it’s hard for us to say that these pages shouldn’t be shown in the search results. So, for example, if we can recognize that the user is really looking for something local and we have some local content, then we will try to show that. But if we can’t tell, and they are looking for information on a certain topic, then maybe we should just show info on that topic regardless of whether it’s from Germany or Switzerland or another country. So that’s not something you can clearly specify in structured data. What you can do is use hreflang markup: if you have multiple language and country versions, you can specify that for us. You can also use the geotargeting feature in Search Console to tell us that your website is specifically targeting Germany, or specifically targeting Switzerland, and we’ll take that into account when we recognize that a user is looking for something local.
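The hreflang markup John mentions might be sketched like this (domains and paths are placeholders), letting Google pick the Swiss version for Swiss users and the German version for German users:

```html
<link rel="alternate" hreflang="de-CH" href="https://example.ch/angebot/">
<link rel="alternate" hreflang="de-DE" href="https://example.de/angebot/">
<link rel="alternate" hreflang="x-default" href="https://example.com/offer/">
```

Each version of the page carries the full set of alternates, including a link to itself.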

 

Q 35:50 I see a disturbing amount of link networks and nefarious link building schemes being used by vaping companies. I reported these as suggested but is there anything else we can do?

 

A 36:00 So reporting them in the Search Console spam report form – that’s a good place to go. That helps us better understand which pages we need to review from a manual webspam point of view. It’s never really guaranteed that we drop those pages completely. When it comes to competitive areas, what we’ll often see is that websites do some things really well and some things really badly, and we try to take the overall picture and use that for overall ranking. For example, it might be that one site uses keyword stuffing in a really terrible way, but the business is actually really fantastic and people really love going there, they love finding it in search, we have lots of really good signals for that site – so we might still show them at number one, even though we recognize that they’re doing keyword stuffing. A lot of times what will also happen is that our algorithms will recognize these kinds of ‘bad states’ and try to ignore them. We do that specifically with regards to links, also keyword stuffing, and some of the other techniques as well: if we can recognize that they’re doing something really weird with links or with keyword stuffing, then we can kind of ignore that and just focus on the good parts where we have reasonable signals that we can use for ranking. So what could be happening here – I didn’t look at these sites or specifically what was reported – is that they’re doing really terrible link building stuff, we ignore most of that, and they’re doing some things fairly well on the side, and based on the things they’re doing fairly well we try to rank them appropriately in the search results. It’s kind of frustrating when you look at this and say, I’m doing everything right, so why are these guys ranking above me? On the other hand, we try to look at the bigger picture and try to understand the relevance a bit better.
And it’s sometimes something that works in your favour as well, because maybe you’ll do something really terrible on your website, and Google’s algorithms look at that and say ‘oh, that looks like a mistake, maybe we can just ignore that and focus on the good parts of the website, and not remove that site completely from search because they found some bad advice online’.

 

Q 38:56 – We help companies find recruitment agencies to work with, and broadly what we have is a site that dynamically creates pages for combinations of jobs and locations. So you might type in “a programmer in Oxford” and it’ll show you a page for programmers in Oxford. So I had these dynamic pages created and put them into the search engine, and when I first did this I saw a real uptick in volume coming from search, because previously I’d been relying on paid clicks. Then we made this mistake where every single link had 13 different pages, and I’ve tried to get them out. What’s happened is that my impressions just go down over time to zero, and literally the only thing I’m coming up for is my brand name. I went to the old version of Search Console and asked it to ignore these web links for me, and it still hasn’t made any difference. I still seem to have nothing coming in, and I’m very nervous about what you said about it taking months for this to come through, because as a start-up – I know it’s not Google’s business – it’s very hard for me to get traffic in. Is there anything I could be doing?

The second part of this question: the way I build my pages is that in my database I have a little bit written about a role and a little bit written about a town, and I combine them together on a page to say, here are programmers in Oxford. So the programmer job spec will say these things and how much you should pay a programmer, alongside a piece about employment in Oxford. But of course, the piece about Oxford is repeated on every single job role, and the piece about programmers is repeated on every single location. Does that mean those pages will get deranked?

A 40:52 – I would be kind of worried about the setup you have there. It sounds a lot like you're taking a lot of chunks from the database and automatically generating combinations of those pages, and it feels to me that a lot of these variations could be very thin and very low quality, and that maybe you don't have a lot of content for those variations. You quickly run into a situation where you create thousands of pages with all these combinations, and the actual amount of content is very minimal.

What could happen on Google's side is that we go off and crawl and index all of these pages and think, oh, there's so much stuff here, we can pick all of this up. But in a second step, when we look at the details, we're like, actually there's not a lot of really useful information here, especially if you're also taking the jobs from existing feeds from other websites. Then what we see is essentially just a mix of some city information, some role information, and job listings which we've already captured somewhere else. We look at that overall and ask, what's the additional value of having your page indexed in addition to the things we already have indexed about all of these combinations?

I think that's the tricky part, and that's where you need to jump in and say, actually, the way I have things combined is completely different from everything else. There's a lot of value that I add through these pages, so that when people go to them they clearly see this is not just a combination of existing feeds, but that there's something really fantastic here that nobody else has captured so far.

Barry chimes in about a similar story.

I could imagine that for ads it's maybe slightly different, if you have ads landing pages, but from an indexing and quality point of view that's something where I would expect you to have a really tough time unless you're providing a lot of really strong value in addition to those listings.

I think what you really want to do, especially for search, is make sure you have a significant amount of value there, much more than what other people have for that kind of content. One approach I usually take is this: if you bring this problem to me and say that your site is not ranking well, and I take that problem to the search ranking engineers, they'll say, well, we actually have this combination indexed five times already; why should we add this other variation? If I don't have a good reason to tell them why they should take this new one instead of those five, then that's something that's really hard for me to justify. It means that in the end you'll be competing against a lot of established players in that scene, while you're a new player who isn't adding significant unique value. So from our algorithmic point of view it's: why should we spend time on this website that's not significantly changing the web for our users?

Q 49:15 – This is relating to the structure of a website, a magazine running on a Drupal site. There are articles in every category, and those articles could be repeated across categories, but there are also landing pages for each category and subcategory. I'm trying to avoid a big mess and condense that. Do you have any advice?

A 49:52 – I guess in general what you want to make sure is that the pages you do have indexed for search are able to stand on their own. So if you have all of these different combinations of categories and landing pages and things like that, then I'd try to figure out which of those variations you want to keep, and noindex the other ones.
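As a brief illustration of the noindex approach John describes (the hangout itself doesn't show any markup, so this is just a sketch of the standard mechanism): a page variation you don't want in Google's index can carry a robots meta tag in its head, while the variation you keep carries none.

```html
<!-- On each category/landing-page variation you do NOT want indexed -->
<head>
  <meta name="robots" content="noindex">
</head>
```

For non-HTML resources, the same directive can be sent as an `X-Robots-Tag: noindex` HTTP response header from the server configuration.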

 

 

 

Marie Haynes is the founder of HIS Web Marketing, formerly at www.HISWebMarketing.com. In 2015, she rebranded the company to Marie Haynes Consulting Inc.