Want the confidence to offer website or SEO services?
Give us 5 minutes a day, no tech knowledge required.
1 quick-tip email a day, and soon you’ll have the confidence to SELL and MANAGE website or SEO services for your clients.
A 'noindex' meta tag is used on a page to prevent it from being indexed by search engines.
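For example, the directive is placed in the page's <head> section:
<meta name="robots" content="noindex">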
This is one of the most common 4xx errors and indicates that the requested URL does not exist.
A redirecting URL specified as canonical can be misinterpreted by search engines; such a conflicting instruction may be ignored. As a result, the wrong (non-canonical) version of the page can be indexed.
When a search engine crawler cannot access the specified canonical page, the instruction will be ignored, and the wrong (non-canonical) version of the page can be indexed.
5xx errors indicate a problem with your web server. You need to check with your hosting provider or with your web developers because your server may be overloaded or misconfigured.
Only use this tag on pages whose links you don't want search crawlers to follow. Otherwise, you should remove it.
Pages where a 'noindex' directive is specified both in the meta tag and in the HTTP response header (X-Robots-Tag).
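The header variant of the same directive is sent by the server with the page's HTTP response, for example:
X-Robots-Tag: noindex
One method is enough; using both is redundant.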
When a non-canonical page is specified as the canonical page, this creates a so-called "canonical chain." For example, Page A points to Page B, which in turn points to Page C, in their 'rel=canonical' elements.
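As an illustration (page URLs are placeholders), the chain would look like this:
On page-a: <link rel="canonical" href="https://example.com/page-b">
On page-b: <link rel="canonical" href="https://example.com/page-c">
Each 'rel=canonical' should instead point directly to the final canonical URL.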
It is enough to implement a 'nofollow' either in the HTML meta tag or in the HTTP header. You don't need both.
These pages will not be shown in search engine results. But since they don't have a 'nofollow' tag, search engine bots will still follow the links on them and pass "link juice."
A 'noindex' directive instructs search engine crawlers not to show a page in search results. A 'nofollow' directive instructs search engine crawlers not to follow the links on a page.
Similar or duplicate pages of your website should have a 'rel=canonical' element to instruct search engines to show the most authoritative (canonical) version of the page in search results.
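For example, a duplicate page declares its canonical version in the <head> section like this (the URL is a placeholder):
<link rel="canonical" href="https://example.com/original-page/">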
Even though Google has announced that any redirection method works and passes PageRank, Googlebot is not the only visitor to your website.
A redirect loop happens when a URL redirects to itself or when a redirect chain leads back to one of the URLs within the chain, creating an infinite loop of redirects.
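For example, with placeholder paths:
/page-a → /page-b → /page-c → /page-a
No visitor or crawler ever reaches a final destination in such a chain.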
Redirects that point to a page returning one of the 4xx or 5xx HTTP response codes. These URLs can be accessed neither by your website visitors nor by search engine crawlers.
Chained redirects can hurt the user experience by slowing down page load times, and they complicate your website's internal linking for search engine crawlers.
Google understands this client-side redirect. However, Google needs to parse the page first to see the destination URL, which can take some time. Meta refresh redirects are also not supported by every browser, and they may confuse your visitors or raise concerns about your website's security.
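A meta refresh redirect looks like this (the destination URL is a placeholder):
<meta http-equiv="refresh" content="0; url=https://example.com/new-page/">
A server-side 301 redirect is the safer alternative.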
Both 301 and 302 redirects pass PageRank, as announced by Google. However, a 302 redirect is temporary by definition and should not be used where the redirection is permanent. If the redirection is permanent, replace the 302 redirects for these URLs with 301 (Moved Permanently) redirects.
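As a sketch, assuming an Apache server with mod_alias enabled and placeholder paths, a permanent redirect can be declared in the .htaccess file like this:
Redirect 301 /old-page /new-page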
Orphan pages of a website have no incoming internal links. Search engine crawlers can only discover such pages from the sitemap file or from external backlinks.
If a page has no outgoing links, it is a "dead end" for both website visitors and search engine crawlers.
Your website's URLs should contain the post name, preferably without dates or strings of numbers.
Both www and non-www versions of your URL are redirected to the same site.
Pages on your website that link to internal or external URLs returning one of the 4xx or 5xx HTTP response codes. These links are widely known as "broken links."
The destination page of the redirect has no incoming internal links. In this case, there is no way your website visitors can reach it from your website apart from the redirecting URL.
If an internal link on your website takes people to an HTTP URL, modern browsers will show a warning about a non-secure page. This can damage your overall website authority and the user experience.
A mixture of followed and nofollowed links to a page is most likely a mistake. An indexable page could get more "link juice" if all internal links to it were followed; followed links to the pages you don't want to be crawled and indexed simply waste the "link equity."
Search engine bots won't be able to reach (and thus index) the pages via nofollowed links.
If the URL has no incoming internal links, there’s no way for people to reach it while browsing your website.
For redirecting URLs on your website, this is not a problem, although we recommend linking to the destination page directly. However, a redirect on an external page you link to requires your attention.
The number of internal links pointing to a page is a signal to search engines about the relative importance of that page.
Search engine crawlers will not follow (crawl) the "nofollow" links on your website, and PageRank won't be passed through them.
HTTPS is one of the ranking signals for Google. It is recommended to adopt HTTPS across your website.
We recommend using the Core Keyword as close to the beginning of the SEO Title as possible.
Generally recommended title length is between 50 and 60 characters (max 600 pixels).
Google sometimes uses meta description tag content to generate snippets if it thinks this gives users a more accurate description than could be taken directly from the page content. Facebook, for example, will use the tag's content for the link preview if the page has no 'og:description' tag.
A general recommendation today is to keep your page description between 110 and 160 characters, although Google can sometimes show longer snippets.
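Both description tags live in the page's <head> section, for example (the text is a placeholder):
<meta name="description" content="A short, informative summary of the page.">
<meta property="og:description" content="A short, informative summary of the page.">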
While it is possible to rank for keywords that are not in the Page URL, it definitely helps to have the Core Keyword present in the URL.
The Core Keyword should appear closer to the beginning of the content.
It's important to have the Core Keyword present in the content more than once.
This is a highly debated topic, but we recommend using at least 600 words for every page on the website.
Pages with a low word count are unlikely to give search engines good coverage of the topic.
The alt attribute is used to describe your image. Search engines will use it to understand the content of your image files. Also, this text will be shown on your page if the image cannot be displayed.
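For example (the file name and description are placeholders):
<img src="/images/red-running-shoes.jpg" alt="Pair of red running shoes on a wooden floor">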
It is recommended to add the focus keyword as part of at least one subheading in the content.
Pages on your website where the HTML code took a long time to load.
A response from the server was not received in time when requesting a page or resource.
Without a meta description, you're missing the opportunity to present a summary of your page content to the search engines.
The page title is displayed in search results, and it shows up as the name of the browser tab for those who visit your web page.
Although multiple title tags probably wouldn't cause problems for Google today, they always create confusion, because only one title will be picked to be displayed in the search results and in the browser tab. Multiple title tags are a relic of old black-hat SEO and won't add authority to your pages.
A meta description tag is generally used to provide search engines with a short, informative summary of what your page is about. High-quality descriptions can sometimes be displayed in Google's search results as search snippets, helping you get higher click-through rates from SERPs.
The H1 tag is the top-level heading of the page. Although it is not as crucial as your page title, an H1 heading is a strong component of your on-page SEO. It helps search engines better understand the content on your page and its overall topic.
It is important to have a publicly accessible blog for your visitors and crawlers.
While it isn't used on all websites and themes, some search engines and website directories will use the text in this field.
Although HTML code is pure text, it can slow down your pages when its size is excessively large.
To reduce the size of data transferred from the web server to the user's browser, compression should be used for text-based assets: CSS, JavaScript, and HTML.
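You can verify that compression is enabled by inspecting the HTTP response headers: a compressed response carries a header such as
Content-Encoding: gzip
(or br if Brotli compression is used).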
Although Google might not be using the HTML lang attribute today, other search engines and programs, such as screen readers, do use it to understand the language of the page.
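The attribute is set on the opening <html> tag, for example:
<html lang="en">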
Announcing different pages for the same language (or language-location) in hreflang annotations can confuse search engines.
This gives contradictory instructions to search engines as to which version of a page to show based on users' language preferences.
Hreflang helps search engines to point users to the most appropriate version of your page, depending on users' language and region.
Confirmation (return) links are missing for the pages declared in hreflang annotations.
To indicate multiple language/location versions of a page to the search engines, each language version of a page must list itself as well as all other language versions.
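For example, the English and German versions of a page would each carry the same full set of annotations (URLs are placeholders):
<link rel="alternate" hreflang="en" href="https://example.com/en/page/">
<link rel="alternate" hreflang="de" href="https://example.com/de/page/">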
Linking to a non-canonical version of a page from hreflang annotations can mislead search engines.
If an hreflang URL does not point to a valid live page, the hreflang annotations may be ignored or not interpreted correctly.
Pages where different language codes are declared in the HTML lang attribute and in the hreflang annotation for the URL.
Although Google might not be using the HTML lang attribute today, other search engines and browsers do.
Pages where the language attribute is missing.
Broken images on your pages will negatively affect the user experience, while search engines will not be able to index these images in their search results.
Images often account for most of the page size and thus can be the main reason for slow pages on your website.
The alt attribute is used to describe your image. Search engines will use it to understand the content of your image files. Also, this text will be shown on your page if the image cannot be displayed.
This forces web browsers and search engine crawlers to make an additional HTTP request in order to reach the destination image URL. On a vast scale, this can increase page loading times for your website.
Some image URLs on your website redirect to another URL.
A response from the server was not received in time when requesting a page or resource. This can damage your website's crawlability (and thus indexability) and have a negative impact on the user experience.
These URLs can't be accessed by your website visitors or by search engine crawlers. Crawlers will be forced to abandon the request, while people will most likely leave your website.
This is one of the most common 4xx errors and indicates that the requested URL does not exist.
4xx HTTP status codes indicate that the requested page or resource cannot be accessed. 401 - Unauthorized, 403 - Forbidden, 408 - Request Timeout, and 404 - Not Found are the most common "Client Errors".
Mixed content occurs when initial HTML is loaded over a secure HTTPS connection, but resource files (images, CSS, or JS) are loaded over an insecure HTTP connection.
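For example, this script reference on a page served over HTTPS is mixed content (the file URL is a placeholder):
<script src="http://example.com/assets/app.js"></script>
The fix is to load the resource over https:// instead.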
JavaScript files that cannot be loaded. Broken JS files will negatively impact the user experience on your pages. Besides, they can lower your pages' authority in the eyes of the search engines. Google, for example, is able to understand and render JS files.
This issue is an instance of mixed content that occurs when HTML pages load over a secure HTTPS connection but link to resources (images, CSS, or JS) over an insecure HTTP connection.
Some pages on your website link to JavaScript files via a redirect. This forces web browsers and search engine crawlers to make an additional HTTP request in order to reach the destination JS file URL. On a vast scale, this can increase page loading times for your website.
If you decide to keep the links to redirecting URLs that do not belong to your website, make sure that the destination JS files are relevant.
Some browsers block insecure resource requests by default. If your page depends on these insecure resources, then your page might not work properly when they get blocked.
Some pages on your website link to CSS files via a redirect. This forces web browsers and search engine crawlers to make an additional HTTP request in order to reach the destination CSS file URL. On a vast scale, this can increase page loading times for your website.
Some pages on your website link to CSS file URLs that return one of the 4xx or 5xx HTTP status codes to our crawler. Broken CSS files will not apply the necessary styles to your pages.
Some CSS files' URLs on your website redirect to another URL. This forces web browsers and search engine crawlers to make an additional HTTP request in order to reach the destination URL. On a vast scale, this can increase page loading times for your website.
CSS files on your website that are larger than 15 kB.
CSS files are plain-text files used for formatting content on web pages. If a CSS file cannot be accessed, the content on your web page will not be rendered properly, damaging the user experience on your website.
Pages with a noindex meta tag included in the sitemap. A sitemap must list all the pages you want search engines to crawl and index, while a noindex meta tag instructs search engine bots not to index a page. Such a combination is contradictory.
A sitemap must list all the pages you want search engines to crawl and index. Redirecting URLs in sitemaps can result in indexability issues on your website.
This can happen when you set up a redirect on your website but search engines have not noticed it yet. Until search engines re-crawl the redirecting URL, they will keep showing it in search results.
If a noindexed page receives organic search traffic, search engines are not following this directive for some reason.
A 403 (Forbidden) HTTP response code indicates that the crawler was not permitted to access the resource during the crawl. Given that the page receives organic traffic, it might have started returning 403 only recently.
This can happen when you delete or move pages without setting up redirects and search engines have not yet removed them from their index.
URLs in the sitemap file did not receive a response from the server in time.
4xx pages in the sitemap send a confusing signal to search engines, asking them to crawl and index “dead” or forbidden pages. This can result in indexability issues on your website.
Search engines will not be able to crawl (and index) the pages they do not have access to.
5xx URLs in sitemaps cannot be accessed by crawlers. This can result in search engines ignoring your sitemaps. In this case, you might end up with some indexability issues on your website.
Sitemap files must list the pages you want search engines to crawl and index. All pages listed in a sitemap are suggested to the search engines as canonicals.
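A minimal sitemap entry looks like this (the URL is a placeholder):
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/page/</loc>
  </url>
</urlset>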
Most servers are set up to ignore a double slash in the URL path. However, such URLs can confuse search engines, as they will be interpreted as stand-alone URLs (e.g., example.com/blog//post vs. example.com/blog/post), which can result in duplicate content issues.
Alt Text Verification
Link Audit
Site Structure Audit
Video content is now more important than ever, but video is also the most difficult/expensive form of content to create. The good news is that we can convert your existing blog posts into explainer videos!
“I guarantee your team will love working with us. Try a block of credits, and if you are unhappy for any reason, we’ll refund all unused hours, no questions asked.”