Experts share the Technical SEO problems that stop a website from getting the traffic it deserves

I asked some SEO experts about the technical SEO problems they see while working in the trenches that stop a website from getting the rankings it deserves. Here are their answers:

JP Garbaccio, Sr. SEO Manager, From The Future agency

Technical SEO issues are always rife, and often the biggest fixes come from a unique, nuanced problem that is only understood through both a commercial and technical lens.

From The Future’s experience as a performance marketing agency has given us plenty of exposure to these, and we’ve chosen to niche down our advice to eCommerce / D2C websites – where technical SEO issues are plentiful.

Let’s briefly explore some of the top technical SEO issues that we find in eCommerce websites.

Non-indexable faceted navigation in the sidebar

Categories are to eCommerce websites what content is to blogs. The more categories you have that are related to your website identity, the greater your topical coverage and ability to rank.

Sidebars present a unique challenge here, as systems like Magento 2 and Shopify don’t have native functionality to produce SEO-friendly, static URLs. Usually, these are parameter URLs that are different from the main category.

This either creates a parameter URL that cannibalizes the main category, or a functional URL that often isn’t indexed by Google (if parameters are excluded) and can’t be found through search.

Plugins like Amasty’s Improved Layered Navigation on Magento 2, or custom edits to Shopify’s Liquid files, can create those indexable, static facet URLs. WooCommerce has few ways to achieve this without heavy development intervention.
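To see how a given platform is currently handling these facet URLs, a quick check like the sketch below can help (the URLs and parameter names are hypothetical): it looks at whether each parameter URL canonicalises back to its parent category or carries a noindex tag – the two signals that it isn’t set up as an indexable, static landing page.

```python
# Minimal sketch (hypothetical URLs): check whether faceted-navigation
# parameter URLs canonicalise back to their parent category or are noindexed,
# i.e. whether or not they can serve as indexable, static landing pages.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlsplit

FACET_URLS = [
    "https://shop.example.com/shoes?color=red",
    "https://shop.example.com/shoes?color=red&size=10",
]

for url in FACET_URLS:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    canonical = soup.select_one('link[rel="canonical"]')
    robots = soup.select_one('meta[name="robots"]')
    parent = urlsplit(url)._replace(query="").geturl()  # category without parameters

    points_to_parent = canonical is not None and canonical.get("href") == parent
    noindexed = robots is not None and "noindex" in robots.get("content", "").lower()
    print(f"{url}\n  canonical -> parent: {points_to_parent}, noindex: {noindexed}")
```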

JavaScript rendering that is not cached by Google

eCommerce websites rely heavily on external plugins and modules, often rendered through JavaScript or APIs. These are responsible for injecting value-adding widgets into category and product pages, like reviews, UGC or immersive experiences.

Depending on how these are implemented, the JavaScript won’t be parsed or cached by Google. In instances where this might add a lot of value for Google (like reviews), it’s important to render it as static HTML / CSS so it can be cached effectively.

Oftentimes, this requires server-side rendering using custom code or solutions like Prerender, which can make this code easily parsable and crawlable for Google.
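One rough way to check whether a widget’s content reaches Google’s initial, unrendered fetch is to look for it in the raw response HTML. A minimal sketch, assuming a hypothetical product URL and a marker string the widget renders:

```python
# Minimal sketch (hypothetical URL and marker): if the marker text is missing
# from the raw response HTML, the widget only exists after client-side
# JavaScript runs and may not be parsed reliably.
import requests

URL = "https://shop.example.com/products/sample-product"  # hypothetical
MARKER = "customer reviews"                               # text the widget should render

html = requests.get(URL, timeout=10).text
if MARKER.lower() in html.lower():
    print("Marker found in response HTML - widget is server-side rendered.")
else:
    print("Marker missing from response HTML - widget depends on client-side JS.")
```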

Incorrect hierarchy architecture not aligned with Google’s taxonomy

Hierarchy architecture and engineering are crucial for topical coverage and authority.

The distance between the root and its categories carries semantic meaning that flows from parent levels down to child levels. Therefore, it’s important that the hierarchy of the website reinforces a clear ontology that is representative of the niche the website operates in.

This can be aligned with Google’s Ads taxonomy, which Google developed to organize and structure niches for its Ads algorithms. Understanding this classification can bolster organic performance when applied correctly.

Canonicalization issues with categories / products

Similar to Shopify’s infamous ‘collection’ in product URLs is the presence of multiple URLs that represent the same page but are usually canonicalized back to a single page.

This can exist across multiple platforms and can apply to category, product and filter URLs – and it creates bloat on large eCommerce sites and marketplaces.

The best practice here is to audit pages that share the same intent but sit on different URLs, and understand technically how they can be reduced to a single page, so that intent stays tightly focused and search engines are not forced to deal with cannibalization issues.
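A simple way to start that audit is to group a sample of URLs by the canonical they declare, so duplicate clusters become visible. A rough sketch, with hypothetical URLs standing in for a real crawl export:

```python
# Rough sketch: group URLs by their declared canonical to surface clusters
# of pages that all resolve to the same target (candidates for consolidation).
from collections import defaultdict
import requests
from bs4 import BeautifulSoup

URLS = [  # hypothetical crawl sample
    "https://shop.example.com/collections/sale/products/widget",
    "https://shop.example.com/products/widget",
    "https://shop.example.com/products/widget?variant=123",
]

clusters = defaultdict(list)
for url in URLS:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    tag = soup.select_one('link[rel="canonical"]')
    target = tag.get("href") if tag else "(no canonical)"
    clusters[target].append(url)

for target, members in clusters.items():
    if len(members) > 1:
        print(f"{len(members)} URLs canonicalise to {target}:")
        for m in members:
            print("  ", m)
```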

Ben, My Singapore Driver

One technical SEO error that hindered my website was not redirecting all the 404 pages.

I have some projects with expired domains that I rebuilt into actual websites to monetize. However, for the first few sites I wasn’t aware of the potential 404 errors from old blog articles that were never recovered. Because I was focused on rebuilding the pages with traffic and backlinks, the older pages started to surface as broken URLs, so the site didn’t actually rank much or get much traffic until I started fixing the redirects with a 301 redirect plugin.

I made use of the 301 redirect plugin to send me weekly reports of broken links and attempted to fix them where possible. Google also assesses a site based on user experience: if there are many broken links, it signals to Google that the site is not well maintained, and it will be less likely to rank as well as a site that offers a better user experience.
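Outside of WordPress, or as a sanity check between the plugin’s weekly reports, a small script can do the same kind of sweep. A minimal sketch with hypothetical page URLs:

```python
# Minimal sketch: crawl a few known pages, collect their internal links,
# and report any that return a 404 (candidates for a 301 redirect).
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlsplit

PAGES = ["https://www.example.com/", "https://www.example.com/blog/"]  # hypothetical
DOMAIN = "www.example.com"

checked = set()
for page in PAGES:
    soup = BeautifulSoup(requests.get(page, timeout=10).text, "html.parser")
    for a in soup.find_all("a", href=True):
        link = urljoin(page, a["href"])
        if urlsplit(link).netloc != DOMAIN or link in checked:
            continue
        checked.add(link)
        status = requests.head(link, allow_redirects=True, timeout=10).status_code
        if status == 404:
            print(f"BROKEN: {link} (linked from {page})")
```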

The other issue that I faced was internal linking. Having too many orphaned pages can also signal to Google that there is a mismatch in content, or a generally bad user interface and experience. The content needs to be linked together in a well-thought-out network.

Previously, when I was an SEO newbie, I would just focus on creating articles and content without any internal links. Some content managed to rank, but most of it suffered until I started internally linking it to pass on the link juice, especially to the hub page, where I saw a massive increase in rankings and traffic.

Oleg K

A website’s internal navigation structure is super important for search engines to understand which pages are and aren’t important: the more prominently you link to a page on your website, the more internal authority flows to it, and the more likely it is to rank for its target queries. But what if search engines can’t see your navigation?

I’ve seen this happen quite frequently with dynamic hamburger menus which only generate navigational links after someone clicks on the menu icon — something crawlers won’t ever do. The result is that your most important pages underperform, or even worse, crawlers can’t discover entire sections of your website.

To avoid this issue, you need to make sure that the <a> links are visible in the source code of the page in mobile view. Ideally, they appear in the response HTML, but at the very least should be present in the rendered HTML before opening up the mobile nav menu.
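A quick way to test this is to fetch the page with a mobile user agent and list the links that exist in the response HTML before any JavaScript runs. A minimal sketch, with a hypothetical URL:

```python
# Minimal sketch: fetch a page as a mobile client and list the <a> links
# present in the raw response HTML. If the main navigation's URLs are not
# in this list, crawlers may never discover them.
import requests
from bs4 import BeautifulSoup

URL = "https://www.example.com/"   # hypothetical
MOBILE_UA = ("Mozilla/5.0 (Linux; Android 10; Pixel 3) AppleWebKit/537.36 "
             "(KHTML, like Gecko) Chrome/120.0 Mobile Safari/537.36")

html = requests.get(URL, headers={"User-Agent": MOBILE_UA}, timeout=10).text
links = {a["href"] for a in BeautifulSoup(html, "html.parser").find_all("a", href=True)}

print(f"{len(links)} links in response HTML:")
for href in sorted(links):
    print("  ", href)
```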

Kevin Wiles

When addressing technical SEO issues, it’s crucial to understand that different types of websites, such as e-commerce versus lead generation platforms, face different challenges. However, several common technical errors can impede website traffic across various niches:

Unoptimized Navigation and Internal Linking: A key issue often seen is an unoptimized navigation structure coupled with poor internal linking. This leads to orphan pages – pages that aren’t linked to from other parts of the site. As a result, these pages don’t receive link equity and struggle to rank for targeted search terms, especially long-tail, commercially focused ones.
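One way to surface orphan pages is to compare the URLs listed in the XML sitemap with the URLs actually reachable through internal links. A simplified sketch, assuming a hypothetical sitemap and a small sample of crawled pages:

```python
# Simplified sketch: any URL listed in the sitemap but never seen in an
# internal link is a likely orphan page that receives no link equity.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

SITEMAP = "https://www.example.com/sitemap.xml"        # hypothetical
CRAWLED_PAGES = [                                      # hypothetical sample of pages
    "https://www.example.com/",
    "https://www.example.com/services/",
]

# html.parser is enough to pull <loc> entries out of a plain sitemap
sitemap_soup = BeautifulSoup(requests.get(SITEMAP, timeout=10).text, "html.parser")
sitemap_urls = {loc.text.strip() for loc in sitemap_soup.find_all("loc")}

linked = set()
for page in CRAWLED_PAGES:
    soup = BeautifulSoup(requests.get(page, timeout=10).text, "html.parser")
    linked.update(urljoin(page, a["href"]) for a in soup.find_all("a", href=True))

for orphan in sorted(sitemap_urls - linked):
    print("Possible orphan:", orphan)
```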

Lack of Focus on Google’s Crawling and Indexing: Another critical oversight is not paying enough attention to how Google crawls and indexes a site. Feedback from Google Search Console, such as “crawled but not currently indexed,” often points to issues with content relevance or quality. Ensuring that a landing page’s content aligns with the targeted keywords is vital for indexing and ranking.

Slow Page Load Speed: Slow-loading pages can significantly harm SEO performance. Page speed is a ranking factor, and slow pages offer poor user experiences, leading to higher bounce rates and lower rankings.

Improper Use of Canonical Tags: Incorrectly implemented canonical tags can lead to significant SEO issues, including confusion about which pages to index and rank.

Broken Links and Redirects: Broken links and improper redirects (like 302 instead of 301 redirects) can harm a site’s SEO by disrupting the user experience and link equity flow.

Lack of Structured Data: Failing to implement structured data (Schema markup) can result in missed opportunities for enhanced search results appearances, which can impact click-through rates.
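On the structured data point, a basic first check is whether a page emits any JSON-LD at all, and of which types. A minimal sketch with a hypothetical URL:

```python
# Minimal sketch: list the JSON-LD @type values a page declares, so missing
# schema (Product, Article, FAQPage, etc.) is easy to spot.
import json
import requests
from bs4 import BeautifulSoup

URL = "https://www.example.com/some-product/"   # hypothetical

soup = BeautifulSoup(requests.get(URL, timeout=10).text, "html.parser")
types = []
for script in soup.find_all("script", type="application/ld+json"):
    try:
        data = json.loads(script.string or "")
    except json.JSONDecodeError:
        continue
    items = data if isinstance(data, list) else [data]
    types += [item.get("@type", "?") for item in items if isinstance(item, dict)]

print("JSON-LD types found:", types or "none")
```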

Ramin Assemi

Two technical SEO errors I’ve often seen preventing sites from getting the traffic they deserve:

1) Redirect chains after multiple CMS migrations.

Oftentimes when switching CMS, you’ll need to implement a different URL structure. For example, when we migrated Close.com to Webflow, we found that we had to restructure a lot of our content because specific design templates only worked in folders, which negatively affected some of our clusters. Shopify is another CMS that often brings its own set of issues with folder structure. We had also once migrated domains from the .io to the .com.

There were still historical artifacts of this on our site, and some 7-step redirect chains. You can use tools like ScreamingFrog, Sitebulb, or free tools like redirect-checker.org or the Redirect Path Chrome extension by Ayima to find redirect chains on your site.
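For spot-checking individual URLs without a crawler, a few lines of code can walk a chain hop by hop. A minimal sketch, with a hypothetical legacy URL:

```python
# Minimal sketch: follow redirects one hop at a time so chains left over
# from old migrations (e.g. .io -> .com -> new folder structure) stand out.
import requests
from urllib.parse import urljoin

url = "http://example.io/old-blog-post"   # hypothetical legacy URL
hops = []
while len(hops) < 10:                     # safety cap
    resp = requests.head(url, allow_redirects=False, timeout=10)
    location = resp.headers.get("Location")
    if resp.status_code not in (301, 302, 307, 308) or not location:
        break
    next_url = urljoin(url, location)
    hops.append((resp.status_code, url, next_url))
    url = next_url

for status, src, dst in hops:
    print(f"{status}: {src} -> {dst}")
print(f"Chain length: {len(hops)} redirect(s); final URL: {url}")
```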

2) Not updating internal links. 

The older and larger your site, the more outdated internal links typically accumulate. You want to run an internal link audit at least once a year. 

In the worst case, outdated links lead to 404 error messages, which is bad for both user experience and search engines. In the best case, you’ve set up 301 redirects, but even then it’s worth updating your internal links.

While it’s true that 301 redirects pass on almost all the link juice, you should still update your internal links to point at the destination page, rather than keep them pointed at a redirect. These things accumulate over time, eat up too much of your crawl budget, and slow down site performance. If you’re dealing with a large site, ideally find a way to update your internal links programmatically: you create a spreadsheet listing each original link and the link you want to replace it with, then run a script that replaces these in your CMS. If that’s not possible, it’s well worth hiring a VA to update those links manually.
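What that programmatic pass looks like depends entirely on the CMS. The sketch below assumes the simplest possible case – a folder of exported HTML files and a hypothetical redirects.csv with old_url and new_url columns – rather than any particular CMS API:

```python
# Rough sketch (hypothetical file names and columns): replace outdated
# internal link targets in exported HTML files using an old_url -> new_url
# mapping. A real CMS would need its own API or database update instead.
import csv
from pathlib import Path

mapping = {}
with open("redirects.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):          # columns: old_url, new_url
        mapping[row["old_url"]] = row["new_url"]

for path in Path("site_export").rglob("*.html"):
    html = path.read_text(encoding="utf-8")
    updated = html
    for old, new in mapping.items():
        updated = updated.replace(f'href="{old}"', f'href="{new}"')
    if updated != html:
        path.write_text(updated, encoding="utf-8")
        print("Updated links in", path)
```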

Justin Herring, Yeah Local

While tag and archive pages can be helpful for user navigation, they often contain thin content and duplicate snippets of content from individual posts.

This can lead to several technical SEO problems:

1. Crawl Budget Waste:

Search engine crawlers have limited resources (crawl budget) to explore and index websites. Not noindexing tag and archive pages can lead to them spending their crawl budget on these low-value pages instead of focusing on more important content like individual posts or product pages.

2. Duplicate Content:

Tag and archive pages often display excerpts or summaries of individual posts, leading to duplicate content issues. This can confuse search engines and potentially harm your website’s ranking for the original content.

3. Keyword Cannibalization:

Both tag and archive pages often target the same keywords as individual posts. This can lead to a phenomenon called keyword cannibalization, where different pages on your website compete with each other for the same keyword ranking, diluting your overall SEO performance.

4. Reduced Index Efficiency:

Indexing a large number of thin and duplicate tag and archive pages can bloat your website’s index. This can make it harder for search engines to understand the structure of your website and identify the most important content.

5. Potential Penalties:

If Google identifies a significant amount of thin or duplicate content on your website, it may penalize your website in search results.

Here’s an example to illustrate the problem:

Imagine you have a website selling shoes. You have a blog post titled “Best Running Shoes for Women” and a tag page for “Running Shoes.” Both the blog post and the tag page would likely contain the keyword “running shoes.” If you don’t noindex the tag page, it could compete with the blog post for the same keyword ranking, potentially lowering the ranking of both pages.

When to Noindex Tag and Archive Pages:

Noindexing tag and archive pages is generally recommended in situations where:

  • They contain little or no unique content.
  • They are dynamically generated and may contain duplicate content.
  • You have a large number of tag and archive pages that are not valuable to users.

How to Noindex Tag and Archive Pages:

There are two methods people commonly reach for, though only one of them actually noindexes pages (a quick verification sketch follows this list):

  • Robots.txt: a Disallow directive in robots.txt tells crawlers not to crawl specific pages or directories, but it does not reliably keep already-linked pages out of the index, and Google no longer supports noindex rules in robots.txt – treat it as a crawl-control tool rather than a noindexing method.
  • Meta Robots Tag: adding a robots meta tag with a noindex value to the HTML head of your tag and archive pages (or an equivalent X-Robots-Tag HTTP header) is the reliable way to tell search engines not to index them.
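To verify that the directive is actually in place on those pages, a quick check might look like this (the tag and archive URLs are hypothetical):

```python
# Minimal sketch: confirm that tag and archive pages return a noindex
# directive, either in the meta robots tag or the X-Robots-Tag header.
import requests
from bs4 import BeautifulSoup

ARCHIVE_URLS = [
    "https://www.example.com/tag/running-shoes/",    # hypothetical
    "https://www.example.com/2023/05/",              # hypothetical
]

for url in ARCHIVE_URLS:
    resp = requests.get(url, timeout=10)
    header = resp.headers.get("X-Robots-Tag", "")
    meta = BeautifulSoup(resp.text, "html.parser").select_one('meta[name="robots"]')
    content = meta.get("content", "") if meta else ""
    noindexed = "noindex" in header.lower() or "noindex" in content.lower()
    print(f"{url}: {'noindex OK' if noindexed else 'still indexable'}")
```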

Nestor Vazquez

Not sure if I’m still in time for this, but here is my big discovery from last year.

Error: Showing the wrong URL in one country

This affects multinational projects: multiple countries sharing the same language.

To avoid confusing Google, the usual go-to for these companies is to give every country its own subfolder and implement hreflang correctly. Yet even with the right setup, Google can still display incorrect URLs in certain countries. For example, in the search results for Mexico, I could see URLs from Argentina.
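A useful first debugging step is to confirm that each country version declares a full, reciprocal set of hreflang alternates. A minimal sketch, with hypothetical country subfolders:

```python
# Minimal sketch: print the hreflang alternates declared on each country
# version of a page; missing or non-reciprocal entries are common causes
# of Google showing the wrong country's URL.
import requests
from bs4 import BeautifulSoup

COUNTRY_PAGES = [
    "https://www.example.com/mx/",   # hypothetical Mexico version
    "https://www.example.com/ar/",   # hypothetical Argentina version
]

for url in COUNTRY_PAGES:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    alternates = {tag.get("hreflang"): tag.get("href")
                  for tag in soup.select('link[rel="alternate"][hreflang]')}
    print(url)
    for lang, href in sorted(alternates.items()):
        print(f"  {lang}: {href}")
```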

Some fixes for this case:

Pruning and Cookie Level Implementation

I removed duplicate URLs by using technology that detects the user’s country from a cookie and then generates the right URL on the fly, as if the country-specific URL really existed. For the user experience, it also shows a flag for the user’s country, for example El Salvador.

Double Check Canonicals

Verify that your CMS outputs canonicals correctly; this is a big enough topic for an entire article. In my case, Rebel Mouse has done a great job for a multinational project.

Nikola Roza

One technical SEO issue I encountered recently is all my blog posts getting duplicated and indexed under a URL that ends with “?”.

These “?” versions are pure duplicate content that Google treated as unique URLs, thanks to that question mark after the trailing slash.

 

This mass content duplication severely dragged the quality of my site down – and right before Google’s core and helpful content updates too. These were unhelpful pages, and I believe they contributed to my site getting slightly dinged during those recent updates.


I solved the problem by adding a rule to my .htaccess file that 301-redirects those indexed non-canonical pages to their canonical versions (ending with a trailing slash and no question mark), signalling to Google that they are duplicates.
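To confirm a fix like that is behaving as intended, you can check that each “?” duplicate now returns a 301 to the clean URL. This is a generic sketch with a hypothetical domain and paths, not Nikola’s actual setup:

```python
# Generic sketch (hypothetical domain and paths): confirm that each "?"
# duplicate now 301-redirects to the clean, trailing-slash URL.
# http.client is used because some HTTP libraries silently drop an empty "?"
# from the URL before sending the request.
import http.client

HOST = "www.example.com"                              # hypothetical
PATHS = ["/blog-post-1/", "/blog-post-2/"]            # hypothetical post paths

for path in PATHS:
    conn = http.client.HTTPSConnection(HOST, timeout=10)
    conn.request("HEAD", path + "?")                  # the faulty duplicate URL
    resp = conn.getresponse()
    location = resp.getheader("Location", "")
    fixed = resp.status == 301 and location.endswith(path)
    print(f"{path}?  ->  {resp.status} {location}  {'OK' if fixed else 'CHECK'}")
    conn.close()
```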

Sure enough, two weeks later hundreds of those faulty URLs disappeared from Google’s index, and all the blog posts that were affected by this issue now perform markedly better.

Easy technical SEO fix that resulted in substantial traffic gain. I wish there were more of these easy SEO wins.

 


Blake Smith, Australian SEO consultant

In my professional experience, a common yet often overlooked technical SEO error involves how industry sites manage PDF reports and studies. Many of these sites accumulate substantial backlink equity through PDF versions of their reports, and this is where the opportunity is frequently missed: PDFs are not as SEO-friendly as the HTML pages on the website.

By recreating these PDF reports as HTML pages and implementing canonical HTTP headers, websites can significantly amplify the utility of their backlinks. This helps search engines understand that the HTML page is the ‘master copy’, thus ensuring the link equity from backlinks to the PDF is effectively passed on to the website. 
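To confirm the header is actually being served on the PDF, you can inspect the response headers directly. A minimal sketch with a hypothetical report URL:

```python
# Minimal sketch: check that a PDF responds with a Link header declaring the
# HTML version as canonical, e.g.
#   Link: <https://www.example.com/reports/annual-study/>; rel="canonical"
import requests

PDF_URL = "https://www.example.com/reports/annual-study.pdf"   # hypothetical

link_header = requests.head(PDF_URL, allow_redirects=True, timeout=10).headers.get("Link", "")
if 'rel="canonical"' in link_header:
    print("Canonical HTTP header present:", link_header)
else:
    print("No canonical Link header found on the PDF.")
```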

This approach not only improves the site’s overall SEO performance but also enhances user experience, as HTML pages are generally more accessible and interactive compared to static PDFs.

Dan Charles, Codarity

One of the biggest challenges we encounter regularly is websites that don’t have Hreflang set up correctly.

This causes HTML lang attribute and annotation issues. These technical issues directly hurt a site’s visibility when it targets international audiences or serves multiple languages.

Often, this can lead to canonicalization issues, which hurt the site’s crawlability and make it much harder for search engines to find the valuable and unique content the brand creates.

The issue can then compound on itself and create broken links, broken images, broken redirects and duplicate content too.

Previously, clients were trying to fix these individual errors in isolation – the 80% of the problem that is really just symptoms.

But we’ve found that isolating the real problem can effectively eliminate the need for such resource-intensive work, by focusing instead on the 20% of issues that are the root cause.

We’ve seen almost immediate jumps in traffic after resolving these issues, and a much more scalable framework for long-term site growth when these key issues are resolved as a priority.
