A Reddit discussion in r/TechSEO began with the brief headline “hreflang in jQuery”.
A helpful user replied with the following advice: “If you habe (have) hreflang tags in the xml file, you don’t need them on page or via js. Just make sure your sitemap is submitted to search console and google identifies the hreflang tags (usually takes a few weeks for a new submission)”
Google’s John Mueller then joined in the discussion and was in complete agreement with the advice given:
Agreed — if you have it (hreflang) in the sitemap, just use that. Adding a second set via JQuery just makes it much harder to diagnose, find, & fix errors.
This advice is pretty clear, and helps prevent sites from causing any confusion as far as Google are concerned.
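As a sketch of what this looks like in practice, hreflang annotations can live entirely in the XML sitemap (the example.com URLs and language codes below are placeholders, not from the thread):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>https://example.com/en/page</loc>
    <!-- Each URL lists all of its language alternates, including itself -->
    <xhtml:link rel="alternate" hreflang="en" href="https://example.com/en/page"/>
    <xhtml:link rel="alternate" hreflang="de" href="https://example.com/de/page"/>
  </url>
  <url>
    <loc>https://example.com/de/page</loc>
    <xhtml:link rel="alternate" hreflang="en" href="https://example.com/en/page"/>
    <xhtml:link rel="alternate" hreflang="de" href="https://example.com/de/page"/>
  </url>
</urlset>
```

With the annotations in the submitted sitemap, there is no need to repeat them on-page or inject them via JavaScript.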
A Reddit thread claimed that using CloudFlare had hurt a site’s SEO; however, Google’s John Mueller disputed this claim.
John confirmed that the content delivery network (CDN) would not have been an issue for the site:
You would not lose ranking because of moving to a CDN.
John also confirmed that shared IP addresses are also not a problem:
Yep – shared IP addresses are no problem.
This backs up a confirmation from John in 2018, when he shared the same message on Twitter:
Shared IP addresses are fine for search! Lots of hosting / CDN environments use them.
This is fairly consistent and conclusive information from John, which should allay any fears about using a CDN or a shared IP address.
There are a number of ways to ensure a page does not appear in search results, and just one way is to make the page private within your content management system (CMS).
However, just because you don’t promote or link to a page, don’t assume it won’t be picked up by search engines.
If you wish to hide a page from the search results until it’s really ready to be launched, then one way to do so is through the URL Removal Tool.
As Google’s John Mueller explained on Reddit, using this tool wouldn’t stop the page from being crawled, but it would prevent it from appearing in search results:
The URL “removal” tool is basically a “URL temporarily hide in the search results” tool — it’s documented pretty much like that too. It doesn’t affect crawling or indexing, it just affects what’s shown in search results. Removing or canceling a removal from a verified Search Console account takes less than a day (it’s usually much quicker).
This is useful to know for those who have yet to use the tool, or didn’t fully understand its purpose.
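For context (this is an illustration, not something from the thread): if the goal is to keep a page out of the index permanently rather than temporarily hidden from results, the standard mechanism is a noindex robots directive in the page’s head:

```html
<!-- Placed in the page's <head>; tells crawlers not to index this page -->
<meta name="robots" content="noindex">
```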
During a Reddit conversation regarding Mobile-First Indexing, Google’s John Mueller offered some insights into the impact of not having a mobile friendly website.
John confirmed the following:
For ranking, if a page isn’t mobile-friendly, we won’t rank it as highly for users on mobile devices. Users on desktop devices will still see it “normally” (where, of course, “normally” doesn’t mean the ranking will never change).
It makes perfect sense that Google would only reward pages with higher mobile rankings if those pages are actually optimised for a mobile device.
John also suggested that even if your offering isn’t targeted to mobile users, you could still look to make your page appealing to them.
After all, Google have to protect the user’s experience.
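As a minimal starting point for mobile-friendliness (an illustration, not something from the discussion), a page would typically declare a responsive viewport in its head:

```html
<!-- Without this, mobile browsers render the page at desktop width
     and scale it down, which usually fails mobile-friendly checks -->
<meta name="viewport" content="width=device-width, initial-scale=1">
```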
Some site owners have noticed that they have a Discover section within Google Search Console, whereas other publishers are missing this information.
The question of whether this is in fact a sign of quality was put to Google’s John Mueller on Twitter.
John confirmed that the missing Discover report is not a quality judgement; it is simply absent because Google does not yet have enough data for the site in the Discover feed:
It’s not a quality judgement — it’s basically just that we don’t have enough data for your site in the Discover feed
Whilst most SEOs and site owners wouldn’t have believed that having Discover in place in Search Console was a sign of perceived quality, it’s still useful to receive that clarity.
Websites link to others from various places on their site, but are all links created equally?
The answer – according to Google – is no.
Google’s John Mueller explained in a Google Webmaster Hangout from 2016 that links to or from your site within the main body of the text are treated as part of a page’s primary content.
However, links to or from boilerplate locations (footer, sidebar, etc.) may carry less weight.
That’s because Google can view these links as less important when compared to primary in-content links.
There is a difference, so bear this in mind when reviewing the location of incoming links to your site in future.
Having duplicate or similar content on various pages can cause issues with ‘unwanted behavior’ from Google.
If you have duplicate versions of the same page, Google will classify one of the pages as the main or original version and run with that, unless of course you set one URL as the canonical version.
In Google’s own words:
If you don’t explicitly tell Google which URL is canonical, Google will make the choice for you, or might consider them both of equal weight, which might lead to unwanted behavior
If you do not choose one URL as the canonical version, then Google will themselves:
Choose one URL as the canonical version and crawl that, and all other URLs will be considered duplicate URLs and crawled less often.
This underlines the importance of the correct use of canonical URLs, as we also covered in this post.
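Declaring the canonical explicitly is a one-line tag in the head of each duplicate (example.com is a placeholder):

```html
<!-- Both https://example.com/page?ref=promo and https://example.com/page
     point Google at the same preferred URL -->
<link rel="canonical" href="https://example.com/page">
```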
In response to a conversation on Twitter, and after being introduced to the conversation by Glenn Gabe, Google’s John Mueller confirmed that Google does not index parts of a specific page independently:
We don’t index parts of a page independently, we index the pages as a whole, and try to understand the context of the content there. Scrolling to a part of a page when we know that’s where the snippet was from makes a lot of sense regardless of indexing.
This is no real surprise, as most people already believed that Google indexed either pages in full, or not at all. Though it is still useful to receive this confirmation from John.
Google has repeatedly said that it updates its algorithm often, in order to improve the quality of its search results.
Whilst some people have guessed this figure could be in the range of hundreds to thousands of updates per year, in July 2019 Google’s Danny Sullivan confirmed that Google made over 3,000 updates in 2018:
Our search algorithms are complex math equations that rely on hundreds of variables, and last year alone, we made more than 3,200 changes to our search systems.
That equates to almost nine changes per day, or 61 updates per week.
This figure will be eye-opening to many, but it really does highlight how often Google are working on and tweaking their algorithm.
Google’s John Mueller was asked on Twitter whether the best solution is for a site to use a trailing slash or no trailing slash at the end of URLs.
John confirmed that the best solution is to use either version, but to then be consistent:
The best solution is to be consistent and only use one version of a URL. Link to that version, redirect to it, use it in sitemaps, use it for rel-canonical, etc.
This means that if you have URLs on your site that do not end in a trailing slash, then all internal links, redirects, canonicals, etc. should not have a trailing slash either.
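As a sketch of enforcing that consistency at the server level (assuming an nginx server, and assuming the no-trailing-slash version has been chosen — both assumptions are mine, not from John’s answer), a single permanent redirect can normalise incoming URLs:

```nginx
# Hypothetical nginx rule: 301-redirect any URL ending in a slash
# to its slash-less equivalent (the homepage "/" is unaffected).
rewrite ^/(.+)/$ /$1 permanent;
```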