Digital Marketing Experts

Technical SEO Audit

The following covers some of the major factors we consider when performing a technical site audit.

A search engine crawler starts with a list of previously discovered URLs, as well as those supplied by an XML sitemap, and works outward from there to discover new and changed content.

You can guide crawl bots by placing directives they should follow in your robots.txt file, which is where they will begin.
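As a rough illustration of how a crawler discovers new URLs once it has consulted robots.txt, the following Python sketch fetches a single page and collects the links a crawler would follow next (www.example.com is a placeholder address, and a real crawler also respects directives and crawl budgets):

from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    """Collects the href value of every anchor tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

start_url = "https://www.example.com/"   # placeholder starting URL
html = urlopen(start_url).read().decode("utf-8", errors="ignore")
collector = LinkCollector()
collector.feed(html)

# Resolve relative links against the page URL; these become the next URLs to crawl.
discovered = {urljoin(start_url, link) for link in collector.links}
print(discovered)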

Crawl Errors

Crawl errors are usually the result of broken internal links. There is a multitude of reasons why links break; finding and fixing them is essential to aid crawl bots.

Limiting crawl errors will aid the indexation of your website, and will therefore lead to a greater presence on the web for relevant search terms.
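As an illustration of how these can be surfaced, the following sketch (placeholder URLs, Python standard library only) checks the status code each internal link returns and flags anything in the error range:

from urllib.error import HTTPError, URLError
from urllib.request import Request, urlopen

# Placeholder list of internal URLs; in practice this comes from a crawl or the XML sitemap.
urls = [
    "https://www.example.com/",
    "https://www.example.com/old-page/",
]

for url in urls:
    try:
        # HEAD keeps the check lightweight; some servers may require a GET instead.
        response = urlopen(Request(url, method="HEAD"), timeout=10)
        status = response.status
    except HTTPError as error:
        status = error.code   # e.g. 404, 410 or 500
    except URLError as error:
        status = f"unreachable ({error.reason})"
    print(status, url)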


Mobile-First Indexing

Mobile-First Indexing means that Google uses the mobile version of your content for indexing and ranking your website. This is a significant shift from the previous approach, in which the desktop version of your website was used.

Check you are Optimised for Mobile-First

Google Search Console now offers good insight into Mobile-First Indexing. We will thoroughly investigate to ensure that your website is fully optimised for Mobile-First. If it isn't, we will advise on the best steps forward, as lacking visible content on mobile can have a detrimental impact on your website's performance.
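One quick heuristic we can script (not a substitute for Google Search Console, and the user-agent strings and threshold below are purely illustrative) is to compare how much text a page serves to a mobile browser versus a desktop one:

import re
from urllib.request import Request, urlopen

URL = "https://www.example.com/"   # placeholder URL

# Illustrative user-agent strings for a desktop and a mobile browser.
DESKTOP_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
MOBILE_UA = "Mozilla/5.0 (Linux; Android 13; Pixel 7) Mobile"

def visible_word_count(user_agent):
    """Fetch the page as the given user agent and count the words left after stripping tags."""
    html = urlopen(Request(URL, headers={"User-Agent": user_agent})).read().decode("utf-8", errors="ignore")
    text = re.sub(r"<script.*?</script>|<style.*?</style>|<[^>]+>", " ", html, flags=re.S | re.I)
    return len(text.split())

desktop, mobile = visible_word_count(DESKTOP_UA), visible_word_count(MOBILE_UA)
print(f"desktop words: {desktop}, mobile words: {mobile}")
if mobile < desktop * 0.8:   # arbitrary threshold for this sketch
    print("The mobile version appears to serve noticeably less content - worth investigating.")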


HTTPS

A website using HTTPS is generally more secure than a non-HTTPS website. This is important for any website that may be offering online payment options.
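A quick way to sanity-check the basics here (a minimal sketch, with www.example.com as a placeholder) is to request the plain HTTP address and confirm the final destination is served over HTTPS:

from urllib.request import urlopen

# Request the insecure address; urlopen follows redirects automatically.
response = urlopen("http://www.example.com/")

final_url = response.geturl()
print("Final URL:", final_url)
print("Redirects to HTTPS:", final_url.startswith("https://"))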


Site Speed

Strong site speed is essential in order to attain good rankings for your website. Google have stated that site speed is a ranking factor.

To help improve your website's speed, Google have introduced the PageSpeed Insights tool. Powered by Lighthouse, it provides a lot of insight in a relatively easy-to-follow format.

Site Speed Checklist

In order to improve your website's speed, there is a checklist we run through:

  • Server Response Time
  • Optimise & Reduce Image Size
  • Minimise Render-Blocking JS & CSS
  • Limit the Number of HTTP & Resource Requests
  • Set a Browser Cache
  • Reduce Redirects & Eliminate any Loops
  • Avoid Unnecessary Loading

This can form a major part of any technical audit, as it's essential that site speed is maximised to ensure all-round strong performance.
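For the first item on the checklist, server response time, a simple timing sketch like the one below (placeholder URL; it measures time to first byte only, not full page load) gives a rough baseline to set against the PageSpeed Insights report:

import time
from urllib.request import urlopen

URL = "https://www.example.com/"   # placeholder URL

samples = []
for _ in range(5):
    start = time.perf_counter()
    with urlopen(URL) as response:
        response.read(1)   # stop as soon as the first byte of the body arrives
    samples.append(time.perf_counter() - start)

# Report the average time to first byte in milliseconds.
print(f"Average TTFB: {sum(samples) / len(samples) * 1000:.0f} ms")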


XML Sitemaps

XML Sitemaps are a way to tell search engines, including Google, about pages on your site that they might not otherwise discover. In its simplest terms, an XML Sitemap is a list of the pages on your website. Creating and submitting a Sitemap helps make sure that Google knows about all the pages on your site, including URLs that may not be discoverable by Google's normal crawling process.

An HTML sitemap allows site visitors to easily navigate a website. It is a bulleted outline text version of the site navigation. The anchor text displayed in the outline is linked to the page it references. Site visitors can go to the Sitemap to locate a topic they are unable to find by searching the site or navigating through the site menus.

Sitemaps are particularly helpful if:

  • Your site has dynamic content.
  • Your site has pages that aren't easily discovered by Googlebot during the crawl process—for example, pages featuring rich AJAX or images.
  • Your site is new and has few links to it. (Googlebot crawls the web by following links from one page to another, so if your site isn't well linked, it may be hard for Google to discover it.)
  • Your site has a large archive of content pages that are not well linked to each other, or are not linked at all.

Although Google doesn't guarantee that they’ll crawl or index all of your URLs, the data in your Sitemap is used to learn about your site's structure, which helps Google to do a better job of crawling your site in the future.
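As an illustration of what a Sitemap actually contains, the sketch below builds a minimal XML sitemap from a placeholder list of pages using Python's standard library (in practice the list would be generated from your CMS or a crawl):

import xml.etree.ElementTree as ET

# Placeholder pages and last-modified dates.
pages = [
    ("https://www.example.com/", "2024-01-15"),
    ("https://www.example.com/services/", "2024-01-10"),
]

urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for loc, lastmod in pages:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = loc
    ET.SubElement(url, "lastmod").text = lastmod

# Writes sitemap.xml, ready to be uploaded to the site root and submitted in Search Console.
ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)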


Robots.txt

A robots.txt file is a basic text file that can stop web crawler software from crawling particular pages of a website.

The file will usually contain a list of commands – “Allow” and “Disallow” – that tell crawlers which pages they can or cannot retrieve.

Benefit of a Robots.txt file

A robots.txt file gives instructions about your website to web crawler robots and allows us to prevent pages or folders from appearing in SERPs (search engine result pages). There are a few other on-page methods, such as the robots meta tag, that can also prevent pages or areas of your website from being indexed.

How to Check if a Robots.txt file is implemented

You can easily check whether a website has a robots.txt file by using an application such as SEO Quake or by simply appending ‘/robots.txt’ to the domain in the browser address bar.

How to Create and Upload a Robots.txt file

The robots.txt file should be placed in the top-level directory on your server and always saved in lowercase – ‘robots.txt’. It can be created in a plain text editor such as Notepad and saved as a .txt file before uploading.

The following is a snippet from a robots.txt file that excludes all robots from the entire server:

User-agent: *
Disallow: /
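To double-check how a rule such as the one above is interpreted, Python's standard library includes a robots.txt parser; this short sketch (placeholder domain) asks whether a given URL may be crawled:

from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")   # placeholder domain
parser.read()

# With "User-agent: *" and "Disallow: /" in place, this prints False for every URL.
print(parser.can_fetch("*", "https://www.example.com/any-page/"))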


Schema Markup

Schema aids search engine bots in understanding what your content is about. It's a developing extension of HTML and, implemented correctly, will aid your website's exposure.

What is Schema used for?

There are many uses for Schema; most data on your website has some association with the itemscope, itemtype and itemprop attributes. Here are the most common content types, with a mark-up sketch after the list:

  • Events
  • Businesses and Organisations
  • Products
  • People
  • Recipes
  • Reviews
  • Videos
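Schema can be expressed either as itemscope, itemtype and itemprop attributes in the HTML or as a JSON-LD block. As a rough illustration, the sketch below assembles a JSON-LD snippet for a hypothetical product using schema.org types:

import json

# Hypothetical product details, for illustration only.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Widget",
    "description": "A placeholder product used to illustrate structured data.",
    "brand": {"@type": "Brand", "name": "Example Co"},
}

# The resulting block would sit in the page inside a script tag of type "application/ld+json".
print(json.dumps(product, indent=2))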

Canonicalization

For SEOs, canonicalization refers to normalizing multiple URLs that load the same content by redirecting them to a single dominant version.

What is Canonicalization?

A well-placed, correctly implemented canonical tag solves the problem for search engines when they are trying to decide which URL to value above others that carry exact or similar content.

SEO Best Practice

Canonicalization issues arise when an individual web page can be loaded from multiple URLs. This is a problem because when multiple URLs serve the same content, links intended for that one page get split among the different URLs, so its popularity is split with them. Unfortunately for web developers, this happens far too often because the default settings for web servers create the problem. The following lists show the most common canonicalization variants produced by the default settings on the two most common web servers:

Apache Web Server:

  • http://www.example.com/
  • http://www.example.com/index.html
  • http://example.com/
  • http://example.com/index.html

Microsoft Internet Information Services (IIS):

  • http://www.example.com/
  • http://www.example.com/default.asp (or .aspx depending on the version)
  • http://example.com/
  • http://example.com/default.asp (or .aspx)
  • or any combination with different capitalization
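A canonical link element in the page head, pointing every variant at one preferred URL, resolves this. The sketch below (a hypothetical helper with illustrative rules and a placeholder domain) shows the kind of normalisation involved in choosing that preferred version:

from urllib.parse import urlsplit, urlunsplit

def canonical_url(url):
    """Collapse common duplicate variants onto one preferred form (illustrative rules only)."""
    scheme, host, path, query, fragment = urlsplit(url)
    host = host.lower()
    if not host.startswith("www."):
        host = "www." + host   # prefer the www. version
    path = path.lower()
    for index_page in ("index.html", "default.asp", "default.aspx"):
        if path.endswith(index_page):
            path = path[: -len(index_page)]   # strip the index document
    return urlunsplit(("https", host, path or "/", query, fragment))

variants = [
    "http://www.example.com/",
    "http://www.example.com/index.html",
    "http://example.com/Default.aspx",
]
print({canonical_url(url) for url in variants})   # all three collapse to one URL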

Our thorough technical audits will evaluate the canonical situation of your website and ensure best practice is adhered to.


Pagination

Pagination issues are rarely identical; they span various web platforms, whether it’s a blog, forum, news or e-commerce site.

Pagination allows SEOs to handle and organise a lot of content on a website, making it more manageable and user friendly. In its most basic sense, pagination occurs when a website segments content over multiple pages.

How it impacts SEO

Pagination resolves issues for visitors to a website; it allows a lot of information to be accessed more logically and sensibly. However, the same isn’t always true from an SEO angle, so check out our latest blog post, where we explore three of the most common SEO problems that can occur when pagination is applied to a website.
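As a basic illustration of what segmenting content looks like, the sketch below splits a placeholder list of items into pages and prints the kind of paginated URLs that result:

# Placeholder content, page size and section URL, for illustration only.
items = [f"Article {n}" for n in range(1, 26)]
PAGE_SIZE = 10
BASE_URL = "https://www.example.com/blog/"

pages = [items[i:i + PAGE_SIZE] for i in range(0, len(items), PAGE_SIZE)]
for number, page_items in enumerate(pages, start=1):
    # Page 1 usually lives at the base URL; later pages get their own path or parameter.
    url = BASE_URL if number == 1 else f"{BASE_URL}page/{number}/"
    print(url, f"({len(page_items)} items)")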


URL Structure

Google favour URLs that are short and use sensible naming conventions. It makes sense to give a URL a sensible name, and the shorter the better, as this will aid efficient indexation.

Lexicon Approach

A URL structure should mirror the approach an efficient library takes. A logical and sensible layout of your website allows a URL structure to be implemented with carefully researched naming conventions. We want these naming conventions to consider search volume, cannibalisation factors and all-round ease of use for indexation purposes.
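To illustrate the naming-convention side of this, a small helper like the hypothetical slugify below (with a placeholder page title and path) shows how a researched phrase becomes a short, readable URL:

import re

def slugify(title):
    """Turn a page title into a short, lowercase, hyphen-separated URL slug."""
    slug = title.lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)   # replace anything non-alphanumeric with a hyphen
    return slug.strip("-")

# Hypothetical page title chosen after keyword research.
print("https://www.example.com/services/" + slugify("Technical SEO Audit") + "/")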

If you are considering a new website or require enhancements to your existing website, then give us a call.