Technical SEO: A Beginner’s Guide

Technical SEO is an essential part of good search engine optimisation. Depending on the choice of analogy, it’s the backbone or foundation on which a website stands or falls. If you’re an SEO novice though, it can be daunting.

What is technical SEO? How does it differ from on-page or off-page SEO? What do site errors mean? How can you improve page load speeds and why do they even matter? Why is a site map important? How do you fix broken links?

These are just a few of the questions you may have.

Don’t worry, I’ve got it covered!

In this guide, I’m going to walk you through some of the most common technical SEO terms and issues. I’ve included links to further resources to help you troubleshoot any problems you might have.

These resources should also help you to feel more confident about discussing technical SEO if you’re working with a web developer or you decide to bring an SEO specialist on board.

What is Technical SEO?

Technical SEO covers all the ways in which you can make it easier for search engines to crawl, index and render (i.e. interpret the code into what we see displayed) your web pages.
In other words, it’s about making sure that your fantastic content is readable by search engines as well as humans.

It’s also a fundamental part of creating a great user experience (UX), as people want to visit web pages that load properly and quickly and are easy to navigate.

How important is technical SEO?

In 2012, Google officially launched the Penguin ‘webspam algorithm update’, which specifically targeted link spam and manipulative link building practices.

In many ways, this algorithm was a turning point for SEO as it marked a shift from ‘gaming’ the search engines to creating websites that ranked highly by putting the UX first.

Websites became more sophisticated and interactive, presenting a challenge in terms of page speed, crawling, indexing and rendering all of the elements.

Technical SEO came into its own as a direct response to this, so much so that it’s now a discipline in its own right.

The most important elements of technical SEO

Of course, as a beginner, you don’t need to specialise, but it is helpful to understand some of the most important elements of technical SEO:

Code/programming languages

If you use a Content Management System (CMS) like WordPress or a site builder like Wix or Squarespace, you can potentially build an entire website without needing to touch a line of code. This is because the coding is done for you as part of the theme. There are also many plugins that can tweak the coding to achieve bespoke functionality.

However, it’s helpful to have a broad understanding of what the main programming languages do and how search engines view them.

  • Structure

HyperText Markup Language (HTML) provides the structure/framework/skeleton around which a website is built. It tells search engines what a web page says, i.e. its titles, body content, alt text, etc.

  • Appearance

Cascading Style Sheets (CSS) tell search engines and browsers what a web page should look like. They’re all about making the skeleton underneath look good. This includes the colours, fonts, style and size, spacing, etc.

  • Action

JavaScript determines how a web page should behave. It can make a page dynamic and interactive, add pop-ups, display third-party ads and much more.
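To see how the three fit together, here’s a minimal sketch of a web page (all of the content and file names are purely illustrative):

    <!DOCTYPE html>
    <html>
    <head>
      <title>My Page</title>                              <!-- HTML: structure and content -->
      <style>
        h1 { color: navy; font-size: 2em; }               /* CSS: appearance */
      </style>
    </head>
    <body>
      <h1>Welcome</h1>
      <img src="logo.png" alt="Company logo">
      <button onclick="alert('Hello!')">Say hello</button> <!-- JavaScript: behaviour -->
    </body>
    </html>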

You can read more about these three programming languages in Moz’s guide to technical SEO.

When a user requests a URL on your domain

When a user requests a web page (either by typing it directly into the browser or clicking on a link to it), the browser makes what’s known as a Domain Name System (DNS) lookup request to convert the domain name into its Internet Protocol (IP) address.

In other words, it turns the human-readable web address into a machine-readable number-based address.
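If you’re curious, you can watch this happen from a terminal using the standard dig command (the address returned here is just an illustration):

    $ dig +short example.com
    93.184.216.34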

The browser then asks for all the code to ‘render’ (i.e. make) the web page. This most commonly uses the three programming languages mentioned above.

The server sends the resources and the browser puts everything together, following the instructions given by the code. The result is shown to the searcher.

If a page does not display properly, it’s usually because some elements within the code have failed to render. Bear in mind that JavaScript (and some other resources) is often indexed on a ‘second pass’ by Google, which can be weeks after the rest of the page is indexed.

Checking how Google renders a web page

Not sure whether your web pages are displaying properly?

You can check whether Google renders a web page properly by checking it using Google Search Console’s URL Inspection Tool.

Mobile-friendly design

Approximately 52% of global internet searches are made on mobile devices but this percentage varies from country to country. For example, 65% of searches are made on mobile devices in Asia.

Wherever you live, if you rely on local searches for your business then the percentage of mobile views may be much higher.

For this reason, it’s absolutely imperative that your website is mobile-friendly and that it displays properly on any device or screen size. This is known as ‘responsive’ web design.

Many companies these days adopt a ‘mobile-first’ design strategy where they design a website for the smallest mobile screens first and then scale the experience up to work on laptops and desktops too.
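In code terms, mobile-first design usually means writing your base CSS for small screens and then layering on rules for larger screens with media queries. Here’s a minimal sketch (the class name and sizes are purely illustrative):

    /* Base styles: applied to all screens, starting with mobile */
    .content {
      width: 100%;
      font-size: 16px;
    }

    /* Enhancements for tablets and desktops */
    @media (min-width: 768px) {
      .content {
        width: 750px;
        margin: 0 auto;
      }
    }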

The crucial thing is that all visitors should have a comparable experience of visiting your website, whatever device they use to view it:

  • Images and text should display properly
  • Text should be readable and all content easy to consume
  • Call to action buttons should be big enough to read and press
  • Navigation should be clear and easy to use
  • Forms should be short
  • A search facility helps visitors access content

How to test whether your website is mobile-friendly

Google’s Mobile-Friendly Test tool is the best starting point for checking whether your website offers a good UX to mobile users.


Simply type in the URL that you want to check and hit Test URL.

This will bring up a report for the URL, including any issues that you might need to address to improve the mobile experience:


The MobileMoxie Page-oscope is another handy free tool; it lets you pick two different devices at a time to see how your website displays on them.

It’s worth checking your Google Analytics to see what devices visitors are most commonly using to view your site. Are there any rendering issues on these devices?

Keep an eye on Google Search Console too – the Enhancements > Mobile Usability report flags up pages with mobile issues, such as clickable elements being too close together or content being wider than the screen and not displaying properly.

Page load speeds

People are time-poor, so they want websites that they can access quickly and easily.

Every second a web page takes to load means more lost visitors and higher bounce rates. According to research by Think with Google, if page load time goes from one second to 10 seconds, the probability of bounce (i.e. someone leaving your site from the page they land on without visiting any other pages) increases by 123%.

Google agrees and has confirmed that page speed is a ranking factor, including for mobile searches. It tells us, “People want to be able to find answers to their questions as fast as possible”.

If your website is packed with high quality, unique content then it’s unlikely that it would be penalised by Google for slow page speeds alone. However, a high bounce rate with low dwell time caused by frustrated visitors leaving your site is likely to have a detrimental effect on your rankings long-term.

Knowing this, it’s important to ensure that your web pages display fully as quickly as possible.

How to check page load speeds

A good starting point is Google’s PageSpeed Insights free tool. To use the tool, simply enter the URL for the web page you want to test and hit Analyze.

This will bring up a traffic-light-style report with scores in red, amber or green, as well as detailed suggestions about steps you can take to reduce your page load times.

Click on the downward pointing grey arrow to the right of each ‘Opportunity’ or ‘Diagnostic’ for more detailed advice about what to do next.

Note: The Google PageSpeed Insights report now gives suggestions of plugins you can use to solve each issue if you have a WordPress website.


Pingdom is another handy tool to help you check your page load speeds (it’s currently available on a free 14-day trial).


How to improve your page load speeds

There are many different actions you can take to boost your page load speeds, depending on what your research using the above tools reveals.

This might include:

  • Enabling file compression to reduce your file sizes
  • Optimising your images to make them smaller
  • Minifying your CSS or JavaScript files
  • Removing render-blocking JavaScript
  • Reducing the number of HTTP requests
  • Reducing the number of redirects on your site
  • Leveraging browser caching
  • Improving server response times

If you’re new to technical SEO, the above list might sound like it’s been written in a foreign language!

There’s a handy guide from Manchester Digital Design to page load speeds that explains all of the above points in more detail, as well as what to do about them and where to find more information.
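To give a flavour of what these fixes can look like in practice, here’s a sketch of two of them – file compression and browser caching – as they might appear in an Apache .htaccess file (this assumes your server runs Apache with the mod_deflate and mod_expires modules enabled; your host can confirm this):

    # Compress text-based resources before sending them to the browser
    <IfModule mod_deflate.c>
      AddOutputFilterByType DEFLATE text/html text/css application/javascript
    </IfModule>

    # Tell browsers to cache static files for a month
    <IfModule mod_expires.c>
      ExpiresActive On
      ExpiresByType image/png "access plus 1 month"
      ExpiresByType text/css "access plus 1 month"
    </IfModule>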

Site hierarchy

The better the structure of your website, the more likely it is to offer a good UX and be ranked highly by search engines.

In fact, Google now rewards really well-structured websites with sitelinks in searches.

This is when, in addition to a link to your main website, the search results show up to six further links to key pages on your site, indented below the top entry.

Sitelinks promote trust and authority and dominate the top half of page one of Google. The only way to achieve a listing like this is to have a strong, logical and user-friendly site structure.

But how can you make your site structure as strong as possible?

  1. Plan the structure of the website – what pages will be accessible from your Home page? What categories will your blog cover? What sub-categories will you need? Your aim should be to create a structure that’s logical and easy to follow.
  2. Create a URL structure that follows your hierarchy.
  3. Navigation should be coded in HTML or CSS to make it easy to crawl.
  4. Make sure that every page on your site can be accessed within a maximum of three clicks.
  5. Make sure the main navigation bar on your site header links to all of your key pages.
  6. Link all related content on your site together using internal links.
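For example, a simple, logical hierarchy might translate into URLs like these (purely illustrative):

    example.com/                                 (Home page)
    example.com/services/                        (top-level page)
    example.com/services/web-design/             (sub-page)
    example.com/blog/                            (blog home)
    example.com/blog/seo/technical-seo-basics/   (blog category, then post)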

You can find more guidance about each of the points above from Neil Patel.

XML site map

In its guide to XML Sitemaps, Oncrawl describes sitemaps in XML format as ‘the Swiss army knife of technical SEO’.

Using the sitemaps.org protocol, an XML sitemap is a file on your website that lists all of the pages that are available for crawling. The file also contains additional metadata about each URL, such as when it was last updated and how important it is in relation to other URLs on the site.
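A minimal sitemap with a single entry looks something like this (the URL, date and priority are illustrative):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://example.com/blog/technical-seo-guide/</loc>
        <lastmod>2020-01-15</lastmod>
        <priority>0.8</priority>
      </url>
    </urlset>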

Google allows you to submit a new sitemap via Google Search Console each time you have new URLs on your website that you would like the search engine to crawl.

Ideally, all of the pages on your website should be easily accessible within a couple of clicks, with related content linked to help visitors.

If, however, you have pages on your website that are buried deep within your site architecture, then an XML sitemap is an effective way to help search engines locate, crawl and index those pages.

An XML sitemap is also helpful for:

  • De-indexing multiple URLs
  • Handling duplicate content (more about this topic below)
  • Establishing international content
  • Searching for orphan pages (i.e. pages that have become unlinked from your main site structure)

If you have a WordPress website, you might want to use the Google XML Sitemaps plugin.

robots.txt

The robots.txt file is created by webmasters to instruct web robots, such as search engine robots, how to crawl a website, including what pages should or shouldn’t be crawled.
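A typical robots.txt file is only a few lines long. Here’s a sketch (the paths are illustrative – a WordPress site might block its admin area like this, for instance):

    User-agent: *
    Disallow: /wp-admin/
    Allow: /wp-admin/admin-ajax.php

    Sitemap: https://example.com/sitemap.xml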

That being said, if a search engine finds enough links to a page, it may index and return it in search results despite what the robots.txt file says.

A better option might be to add a ‘noindex’ tag if you have a page that you don’t want to appear in searches.
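The ‘noindex’ tag is a single line placed in the <head> section of any page you want to keep out of search results:

    <meta name="robots" content="noindex">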

Yoast has created an ultimate guide to robots.txt that you might find useful for further reading. They also have a guide to robots.txt best practice.

Check out the comprehensive guide from Moz that takes a deep dive into this topic.

Not sure whether you have a robots.txt file on your website? Simply type in your domain address and add /robots.txt – e.g. seo-plus.co.uk/robots.txt – and this will display the file.

Broken links

The links you use on your website – whether they’re outgoing to other sites or they link pages internally – are a way of connecting content and giving visitors a clear path from A to B to C and so on.

For this reason, a broken link represents an unexpected obstacle on that journey, a road block that can damage the UX and how search engines crawl your site.

Broken links can occur for many different reasons:

  • The destination URL is deleted
  • Content is updated and moved
  • The link has a typo in the URL
  • The site structure is changed

It’s important to keep an eye out for broken links so that you can correct or remove them.
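If you’re comfortable with a terminal, you can also check any individual link’s status with curl (the URL below is illustrative); a 200 response means the page is fine, while a 404 means the link is broken:

    $ curl -s -o /dev/null -w "%{http_code}\n" https://example.com/some-page/
    200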

If you use Chrome as your browser, you can install the free ‘Check my links’ extension to check one page of your website at a time. Simply visit the URL you want to check and hit the icon to the right of the address bar.


This will bring up a colour-coded list and highlight all of the links on the web page, with broken links clearly marked in red.

For a site-wide look for broken links, you can use the Screaming Frog SEO Spider (free or paid-for versions).

WordStream has published a helpful guide to finding and fixing broken links – this includes using tools such as Google Analytics and Xenu Link Sleuth.

HTTP error messages

Sometimes things go wrong when a person tries to visit a web page. When this happens, the visitor will see an HTTP message from the web server highlighting the problem.

Each message or code denotes a different problem. According to Google, the most common HTTP error codes are:

  1. HTTP Error 500 (Internal Server Error): The most common HTTP error of all, this is a general-purpose message for when a web server encounters some form of internal error. A visitor might see this when your website receives higher than usual traffic volumes because the server is struggling to handle the requests properly.
  2. HTTP Error 403 (Forbidden): A bit like a 401 error (see below), a 403 error occurs when someone tries to access a forbidden directory on your website. In this case, there will never have been the option to log in; the directory may simply be hidden from public view.
  3. HTTP Error 404 (Not Found): This error appears when a URL no longer exists, often because the content has been deleted or moved. See ‘Dealing with 404 errors’ below for more advice about this issue.
  4. HTTP Error 400 (Bad Request): This message comes from the web server to tell visitors that the application they’re using (e.g. the web browser) accessed the URL incorrectly or that the request was somehow corrupted on the way.
  5. HTTP Error 401 (Unauthorised): This happens when a visitor tries to view a page that they’re not authorised to see, typically because of a failed login.

Dealing with 404 errors

A 404 error occurs when someone tries to visit a URL on your website that doesn’t exist. They will see a ‘Page Not Found’ message.

Most often, this error is the result of a page that’s been deleted or moved or a misspelled URL.

While Google says that 404 errors are ‘a perfectly normal part of the web’, they can still damage the UX and, indirectly, your rankings.

If you do have 404 errors on your site, it’s worth investigating them further and doing what you can to fix them.

The easiest way to identify 404 errors is via Google Search Console. Open the Coverage Report for a list of any errors on your site. Click on each error listed to see which pages are affected.

Create a bespoke 404 ‘Page Not Found’ page

Inevitably, a 404 error will occur on your website at some point. Rather than resulting in a frustrating dead end for your site visitors, you can adapt the page to keep people moving through your site.

OptinMonster has some fantastic examples of 404 pages that ‘turn lost visitors into loyal customers’. There are also some wonderfully creative examples on the HubSpot blog.

As you’ll see, most of these bespoke pages simply take visitors back to the site’s Home page but they do it with humour and/or the brand voice, which is far more powerful than a generic message.
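How you point your site at a bespoke 404 page depends on your setup – many CMSs and themes handle this for you. On an Apache server, for example, it can be a single line in your .htaccess file (the file name is illustrative):

    ErrorDocument 404 /custom-404.html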

Duplicate content

Duplicate content is content that appears on the Internet in more than one place, i.e. on more than one web address.

It’s estimated that some 29% of the web comprises duplicate content.

There are legitimate reasons for duplicate content to appear, such as:

  • Common details shared by multiple products or services on your site, e.g. you sell the same product but in different colours or sizes
  • Many different sellers using the manufacturer’s product description on their e-commerce sites, leading to duplicate content across many unrelated sites
  • Syndicated content that was published on your site originally but you have subsequently published on other sites

Duplicate content often comes about by accident too. This could be due to multiple URL variations, often generated by session IDs or printer-friendly versions of content.

If you have ‘www.example.com’ and ‘example.com’ versions of your site, this can also lead to entire sites being duplicated.

However, duplicate content can also come about because of plagiarism, i.e. someone copying content and passing it off as their own.

Duplicate content also presents a problem from an SEO perspective for three main reasons:

  1. Search engines don’t know which version they should index
  2. It’s impossible for search engines to know how to direct the link juice. Should it go to one page? If so, which one? Or should it be diluted across multiple pages?
  3. Search engines don’t know which version to rank in search queries

These issues will potentially result in your pages underperforming in searches because of lower rankings.

Dealing with duplicate content

Siteliner is a handy tool to help you identify potential duplicate content on your website.

If you think other websites may have plagiarised your content, you can check using Copyscape or even copy a portion of text into Google to see what comes back when you hit the search icon.

If you do find duplicate content, there are several potential ways to deal with it. The most common are:

  • Apply a 301 Redirect, which sends all traffic for the copied page to your chosen URL
  • Use a rel="canonical" attribute in the HTML <head> of all duplicate pages, pointing to the URL of the original page – this tells search engines that they’re looking at a copy and that all link juice, authority, etc. should be applied to the original (see the sketch below)
  • Use the Meta Robots Noindex tag, which asks search engines not to index the duplicate page
  • Set the preferred domain of your site, e.g. https://example.com instead of https://www.example.com, in Google Search Console
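As a sketch of the canonical option, the duplicate page’s <head> carries a single extra line pointing at the original (the URL is illustrative):

    <link rel="canonical" href="https://example.com/original-page/">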

Moz has published a helpful guide to dealing with duplicate content on your site, which walks you through each of the options above.

Once again, if you have a WordPress website, there are a number of plugins that can help you manage duplicate content or deal with 301 Redirects.

Site security (HTTPS/SSL)

When launching its HTTPS Everywhere campaign way back in 2014, Google highlighted that it would be prioritising secure search to protect the privacy of web users against malicious attacks.

It also confirmed that HTTPS would be a ranking signal, albeit a ‘lightweight’ one.

Since 2018, the search engine has warned searchers any time they might be about to visit a non-secure website.


Non-secure sites may experience higher bounce rates or traffic below their potential, simply because visitors are discouraged from entering the site.

You can tell if a website is secure because it will have HTTPS instead of HTTP at the beginning of the web address and/or a padlock icon.


This shows that the site uses Secure Sockets Layer (SSL) to secure traffic between browsers and web servers.

SSL Certificates can be obtained and activated for free and will make your website secure. If your website is hosted by a third party, they would usually be your first port of call as they may be able to obtain and activate the certificate on your behalf.

If your website still isn’t using SSL, it would be advisable to change this ASAP. It may not have a huge impact on your rankings but it is an important signal to your potential customers that your website can be trusted.
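Once your SSL certificate is active, you’ll also want visitors who arrive via HTTP to be redirected to the HTTPS version of each page. On an Apache server this is often done with a few lines in .htaccess – a sketch, assuming the mod_rewrite module is enabled:

    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]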

You can read my guide to converting to HTTPS here.

Schema markup/Rich snippets

As I explained in my Schema made easy – beginner’s guide, web page features such as reviews, recipes, ratings, events, and product descriptions are easy for human visitors to recognise but search engines can find them harder to decipher. This is where structured markup comes in.

It’s a special language of tags that go into the HTML code of a web page. These tags tell search engines what specific elements on the page are, as well as what they mean.

As well as helping search engines to better understand the content of web pages, Schema markup can also help to highlight special bits of information for Google to feature about your website in search engine results pages (SERPs).

These extra pieces of information are known as ‘rich snippets’.

Here’s an example:

A search for ‘vegetarian lasagne’ brings up a SERP that features pictures of highly rated recipes from various sources. We can even see how many people have reviewed the recipes, the star ratings, calories per portion, cooking times and more without having to click on the link for information.

Google is able to collect this information because of Schema markup.
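Here’s a sketch of what the markup behind a result like that might look like, using the JSON-LD format that Google recommends (all of the values are made up for illustration):

    <script type="application/ld+json">
    {
      "@context": "https://schema.org/",
      "@type": "Recipe",
      "name": "Vegetarian Lasagne",
      "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.8",
        "ratingCount": "312"
      },
      "totalTime": "PT1H15M",
      "nutrition": {
        "@type": "NutritionInformation",
        "calories": "520 calories"
      }
    }
    </script>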

Although machine learning means that Google is getting better at recognising data such as star ratings and reviews, it’s still advisable to use Schema markup on your website to highlight and add context to any important information.

If you have a WordPress website, there are some excellent plugins designed to make adding Schema markup and rich snippets as easy as possible.

A quick recap

As we’ve seen, technical SEO provides the foundation on which you can build a great UX for your website visitors.

Fast load speeds, mobile-friendly design, a logical hierarchy with clear navigation, security and a hassle-free journey through your pages can only serve to enhance your content and design.

Technical SEO is a huge topic; it’s also ever-changing. Even if you don’t have time to read all of the resources I’ve linked to in this guide, hopefully some of the information I’ve provided will help you to identify ways in which you can fine tune your website’s performance.

A good starting point is to keep an eye on Google Search Console for any potential technical SEO issues affecting your website. Google’s PageSpeed Insights and mobile-friendly test are both helpful non-techy tools too.

Need more hands-on help?

Join my 12-week SEO Accelerator programme, which includes 10 modules with step-by-step instructions, worksheets and support to cover the exact process of getting your website ranked highly on Google.

This covers on-page, off-page and technical SEO tips, all designed to turn your website into a customer attraction machine.
