
  • Polemic Digital backs Glentoran FC for the 2018/19 season

    Polemic Digital’s logo features on the back of Glentoran’s home and away shirts for the 2018/19 season.

    While Polemic Digital works with clients across the globe such as News UK, Seven West Media, Fox News, and Mail Online, we’re a key part of East Belfast’s thriving business community and have been based in the City East Business Centre since our inception. Last month we agreed a partnership with Glentoran FC, the iconic East Belfast football club, to become one of the club’s sponsors, with our company’s logo featuring on the back of the players’ 2018/19 shirts.

    Commenting on this partnership with Glentoran FC, Polemic Digital’s founder Barry Adams said: “The people and businesses of East Belfast have supported and inspired us to take pride in working hard and achieving great results. Glentoran embodies this spirit of teamwork and the will to win against the odds. In our conversations with Simon Wallace we quickly recognised the kindred spirit shared by Glentoran and Polemic Digital, and we’re proud to become part of the club’s long and celebrated history.”

    Glentoran’s Simon Wallace pictured with Barry Adams from Polemic Digital at The Oval.

    Simon Wallace, commercial manager at Glentoran FC, added: “It’s great to have a successful local business like Polemic Digital partner with Glentoran and support our club. As a small local firm, Polemic manages to punch above its weight locally and internationally, which mirrors the drive and ambition that Glentoran FC has shown throughout the years.”

    “We’re looking forward to a great season in the league,” Barry Adams continued. “Glentoran is such an iconic club, we’re fully behind the team and hope for a successful season.”

    Read more about Polemic Digital here, and visit the Glentoran website at www.glentoran.com.

  • Deconstructing Google’s “Users First” Statement

    As part of the ongoing battle between Google and EU regulators, Eric Schmidt published an open letter on the Google Europe blog arguing that Google is built for users first, not for publishers. Ignoring the blatant lie in that statement – nowadays Google is built first and foremost for advertisers, its only real customers – if we take Schmidt’s statement at face value, it’s a horribly naive position for the company to adopt.

    In most industries, putting users first is perfectly sensible. Manufacturers especially should always take their users’ best interests to heart and design their products accordingly. Google, however, is not a manufacturer. Google is an intermediary, connecting users with information sources. And, like any intermediary, it has a responsibility towards both sides of its value proposition. If we take the examples of other intermediaries, we immediately recognise the need for these companies to perform a balancing act between what’s best for their users and what’s best for their suppliers.

    The Balancing Act

    Supermarkets are intermediaries, connecting consumers to manufacturers. A lot of supermarkets want to do what is best for their users – low prices – but need to balance this against the needs of their suppliers. If supermarkets like Tesco err too much on the side of consumers, the end result is a lower quality of product. It’s an inevitable outcome in their ecosystem, where continued price pressures on manufacturers urge them to drive down costs at the expense of all other concerns, which in turn leads to lower quality labour and ingredients, and eventually a lower quality product.

    The same is true in the travel industry. Intermediaries like Expedia want to offer their users the best possible deals, but at the same time hotels and airlines want to maintain a decent standard of service. It’s a tight balancing act, and one that forces these intermediaries to weigh the benefits to their users against the needs of their suppliers.

    In the travel industry, if there’s too much downward pressure on price, hotels and airlines can opt out of Expedia’s intermediary platform and try to win customers through other channels. This is a sensible option in many other intermediary ecosystems, which helps keep the intermediary platforms honest. In Google’s case, however, its near-monopolistic dominance in Europe ensures it is commercial suicide for companies to opt out of Google’s intermediary platform. The resources required to win customers outside of Google’s ecosystem are vastly beyond the limits of most organisations.

    Google’s Unique Intermediary Economy

    On top of that, Google has a rather unique position in that it has free access to the source of its offering. Due to the free and open nature of the web, Google’s ‘suppliers’ – the very publishers Eric Schmidt is criticising in his letter – have to opt out of Google’s ecosystem. By default the entire web is Google’s supply chain.

    In years past, the value proposition was clear for publishers: Google takes their content for free, and in return Google sends a lot of visitors to the publishers’ websites. However, Google has increasingly skewed this value proposition in its own favour. Instead of sending users to the publishers’ websites, Google aims to keep them on its own properties so it can harvest more data from them and use this to make more money from its real customers (i.e. advertisers). It is precisely this unbalancing of the ecosystem that publishers are railing against, and that the EU regulators want to address.
    The Wrong Perspective

    The problem is, at its core, that Google doesn’t see itself as an intermediary. Google sees itself as a manufacturer (of advertising platforms), and is loath to take the needs of its suppliers into account in any decision it makes. This is problematic on many different levels.

    First and foremost, if Google continues to deny its place in the internet ecosystem as an intermediary, it will continue to monopolise users’ online behaviour to improve its value for advertisers and thus secure its growth. As a result its suppliers – online publishers – will be forced to drive down costs, with all the negative repercussions. Many will go out of business entirely, which in the long run undermines Google’s own service, as it will have less content to serve to its users, and the content it does serve will be of lower quality.

    And it’s not just publishers that Google is putting under pressure. Almost every online industry is already, or will be at some stage in the future, subject to Google’s ‘users first’ mantra when it decides it can offer a better service. We already see Google moving into verticals such as local, travel, and finance.

    Eric Schmidt’s ‘users first’ statement is testament to the incredibly naive mindset that pervades Google. The company still believes at its core that it’s a small start-up, fighting the good fight against the establishment. This is profoundly and wilfully ignorant. Google has long since ceased to be a small start-up, and is in fact a monstrously dominant powerhouse capable of causing untold destruction to every industry it touches. The company’s failure to even remotely appreciate this fact should be a grave cause for concern for everyone who cares about the web’s egalitarian promise.

  • How To Get In To Google News – My Moz Whiteboard Friday

    I was on an extended trip through the USA late last year, with stops in New York, Las Vegas, and Seattle. The first two were primarily for work but the latter was mostly for relaxation. Nonetheless, when you’re an SEO visiting Seattle you should always make an attempt to visit the Moz offices. Moz is probably the most famous SEO-focused company in the world, and their blog has been setting the standard for excellent SEO content for years.

    Every Friday, Moz publishes a short video in which a particular aspect or concept of SEO is explained. These are their so-called ‘Whiteboard Friday’ videos, because the format is a presenter in front of a whiteboard. Simple yet highly effective, and widely copied as well.

    When I visited the Moz offices last year, the lovely folks there asked me if I’d like to record a Whiteboard Friday. That was a bit of a no-brainer – I jumped at the opportunity. The topic I chose is one that doesn’t get covered often in the SEO industry, and is close to my heart: how to get a news site included in Google’s separate Google News index. The recording went smoothly and the Moz folks did their usual post-production magic before it was published on their site earlier this month.

    So here then is my Moz Whiteboard Friday – How to get into Google News:

    1. Have a dedicated news site. A subsection of a commercial site will not be accepted. Ensure your site is a separate entity focused entirely on providing news and background content. Having multiple authors and providing unique, newsworthy content is also highly recommended.

    2. Static URLs for articles and sections. Google wants your articles and section pages to remain on the same URLs so that they can be recrawled regularly. A new URL means a new article for Google News, so if your article URLs change it can cause problems for Google News.

    3. Plain HTML. Due to the speed with which news changes, Google News only uses the first-stage indexing process in Google’s indexing ecosystem. As such, it’s important that your entire article content is present in the HTML source and doesn’t require any client-side code (such as JavaScript) to be rendered.

    Furthermore, there are some technical aspects that are not required but strongly recommended: a separate news-specific XML sitemap for all your news articles published in the last 48 hours, and (News)Article structured data to help Google index and categorise your articles quickly.

    Lastly, if your news site covers a specific niche or specialised topic, that tends to help with being accepted into Google News. There are plenty of general news sites already, and Google News doesn’t really need more of those. Specialised news sites focusing on a specific niche will help broaden Google News’s scope, so you’ll find it a bit easier to get into Google News when your site has such a focus.

    Make sure you watch the full video on the Moz blog, and give it a thumbs up there if you enjoyed it.
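
    As a concrete illustration of the (News)Article structured data recommendation above, here is a minimal sketch of how a NewsArticle JSON-LD block could be generated for an article page. All the URLs, names, and dates are hypothetical placeholders, and this is only an example of the general shape of the markup, not an exhaustive list of the properties Google recommends – check Google’s current documentation for that.

```python
import json

def build_news_article_jsonld(headline, url, published, modified,
                              author_name, publisher_name, logo_url, image_url):
    """Build a minimal NewsArticle JSON-LD payload for embedding in a page's <head>."""
    data = {
        "@context": "https://schema.org",
        "@type": "NewsArticle",
        "mainEntityOfPage": {"@type": "WebPage", "@id": url},
        "headline": headline,
        "image": [image_url],
        "datePublished": published,   # ISO 8601, e.g. "2019-02-01T09:00:00+00:00"
        "dateModified": modified,
        "author": {"@type": "Person", "name": author_name},
        "publisher": {
            "@type": "Organization",
            "name": publisher_name,
            "logo": {"@type": "ImageObject", "url": logo_url},
        },
    }
    # Wrap the JSON in the script tag that would go into the article's HTML head.
    return '<script type="application/ld+json">{}</script>'.format(
        json.dumps(data, indent=2)
    )

# Hypothetical example values, for illustration only.
print(build_news_article_jsonld(
    headline="Example headline for a news story",
    url="https://news.example.com/2019/02/example-story/",
    published="2019-02-01T09:00:00+00:00",
    modified="2019-02-01T11:30:00+00:00",
    author_name="Jane Reporter",
    publisher_name="Example News",
    logo_url="https://news.example.com/static/logo.png",
    image_url="https://news.example.com/static/example-story.jpg",
))
```

    In practice this markup would be generated by the CMS templates for every article, so the headline and dates always match the visible content of the page.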

  • Preventing Saturation and Preserving Sanity

    Over the past few years I’ve spoken at a lot of conferences. I’m not quite as prolific as, for example, the amazing Aleyda Solis, but there have been significant periods where I spoke at an event at least once every month.

    I enjoy speaking at conferences. A large part of my enjoyment comes from sharing my knowledge and meeting people in the industry. I get to hang out with old friends and make new ones, and the privilege of going up on stage to have hundreds of people listen to me is one I never take for granted. Thanks to conferences I’ve been able to travel to amazing places and meet up with awesome people. The past few years I’ve travelled to cities like New York, Las Vegas, Paris, Istanbul, Milan, Bonn, Amsterdam, and numerous places in the UK and Ireland – all thanks to events I was invited to speak at.

    But I also dislike going to conferences. The travel is never fun (I’m a grumpy traveller at the best of times), I rarely sleep well in hotel beds, and my nutrition takes the usual hit. I also feel a lot of pressure to deliver a good talk, one that entertains and informs and is hopefully worthwhile and unique.

    And then there’s the socialising bit. At heart, I’m an introvert pretending to be an extrovert. I’m not great at socialising but I make an effort, because I do enjoy hanging out with people I like – and fortunately the SEO industry has plenty of fun people to hang out with. I’ve made several great friends in the industry over the years, thanks to conferences and the surrounding social activities. But there’s only so much I can handle. My reservoir of social interaction is limited, and conferences drain that reservoir very quickly.

    I’ve been very lucky that my wife and business partner Alison joins me at many events, and helps make socialising so much easier for me. Unlike me, she actually likes people in general and enjoys chatting to new folks. She’s been an incredible support for me over the years as our business has grown and my conference speaking gigs became more numerous and more international.

    All in all, despite the fun bits and all the support I’ve received, it’s been taking a toll on me. The travel, the lack of sleep, the pressure of delivering, the socialising, and of course the time away from actual paid work – speaking at conferences comes at a price, and it’s one I’m increasingly reluctant to pay.

    I’ve already agreed to a number of events for the remainder of 2019, and I’m genuinely looking forward to each and every one of these: Optimisey Cambridge SEO Meetup, SMX Munich, BrightonSEO, eComm Live, The Tomorrow Lab Presents, Digital Elite Day, Digital DNA, SearchLeeds, Nottingham Digital Summit, State of Digital, and Chiang Mai SEO. Some are events I’ve never spoken at but have wanted to, and others are recurring events that I always enjoy being a small part of. So I’m committing to these events and will work damn hard to deliver great talks at every single one.

    After that, I’m pulling on the brakes. For a long time I felt that speaking at conferences was a way to prove myself, to show that I knew my stuff and wasn’t half-bad at this SEO malarkey. The bigger the stage, the more I felt affirmed in my knowledge and experience. That aspect of it has lost its lustre for me. I don’t feel I’ve anything left to prove. I’ve become increasingly confident in my own abilities as an SEO, and feel I’ve gotten a good handle on my imposter syndrome.
    Also, I sometimes feel that by speaking at a conference I’m taking up a spot that could’ve gone to someone else, someone who is still building their reputation or who has more worthwhile content to share. And, let’s be honest, there are enough white guys speaking at conferences. If I take a step back from the conference circuit, maybe that’ll allow someone else to take a step up.

    So from now on I’ll keep my speaking calendar a lot emptier. I’m not retiring from the conference circuit entirely – I enjoy it too much – but I’ll be speaking much less often. I’ll be on stage at a small handful of events every year at most, and mainly outside of the UK (with one or two exceptions). This will hopefully free me up to focus on my paid client work, as well as my SEO training offering. And I’ll keep showing my face at events like BrightonSEO, as for me those feel more like regular SEO family gatherings.

    It’s a selfish move of course, as much to prevent my name from saturating the conference circuit as to preserve my sanity. I feel I’m at risk of losing appeal as a speaker, as there have been so many opportunities to see me speak. Maybe by enforcing some scarcity, I’ll stay attractive to conference organisers while also making sure I can deliver top-notch talks at the few events I choose.

    But foremost I want to prevent burning out. I’ve felt quite stretched the last while, always running from one place to the next while trying to meet deadline after deadline. It’s time I slow down the Barry-train and focus primarily on my client work. Conferences are great fun but they also consume a lot of time and energy. Those are resources that I need to treat with more respect.

    I hope to see many of you at the 2019 events still to come, and I’ll do my best to stay in contact with my industry friends. Conferences are a great way to keep in touch, but definitely not the only way. Some of our best industry friends have visited us in Northern Ireland, and I want to make time to do the same and visit our friends where they live. Those are the trips that don’t cost energy, but recharge the batteries. I need to do more of those.

    So, in short, I’m not going away, but I’ll become less ubiquitous. It’s a win-win for everyone. :)

  • The Client-Agency Dynamic

    One of the most rewarding, frustrating, enriching and infuriating aspects of working in a service agency is dealing with clients. I often joke that my job would be perfect if it wasn’t for clients – but truth be told, I love our clients. Without our clients I would very literally not have a job. It’s that simple. But the affection I feel for my clients goes far beyond the sober realisation that they pay my salary.

    Clients enable me to work in this magnificent industry of ours, with all its beauty and creativity and companionship and cutting-edge technology. Clients empower me to create amazing things and tell powerful stories. Clients drive me to constantly improve and deliver the very best work I and my team are capable of. Clients are, in a very real and tangible sense, the reason I do this job. And yet clients can also be excruciating to deal with, impossible to please, and obtusely difficult to communicate with. It’s that client-agency dynamic that makes agency life, in varying degrees, both profoundly awesome and mind-numbingly depressing.

    The Best Clients

    It’s easy to describe the best type of clients, because all the clients we love collaborating with have one thing in common: they understand digital. For us the best clients are those that have at least a passable understanding of what websites and digital marketing can do. Generally speaking, we find that the more educated a client is on all things digital, the stronger and more successful our partnership with them is.

    This image is supposed to invoke happiness

    The more advanced a client’s understanding of digital marketing, the better for us, as invariably this will be a client that knows exactly where agencies can fit into their overall marketing. More importantly, these clients have realistic expectations of what can be achieved, and a very clear framework within which our creative ideas can flourish. This actually makes them demanding clients, as they simply won’t settle for anything but the best, but that is an expectation we wholeheartedly strive to live up to, because these clients understand the value of quality work and will pay appropriate rates.

    The Worst Clients

    It’s equally easy to describe the kind of clients we’re now actively trying to avoid: clients that treat agencies like cheap labour. These are the clients that move the goalposts all the time and don’t expect to see an increased bill. These are the clients that call you up with inane requests and demand you drop everything to fulfil their needs instantly. These are the clients that do not see an agency as a partner, but as a servant to cater to their every whim.

    Not a good conference call

    These types of clients are also very demanding, but in a very different way. Rarely will these clients have an in-depth understanding of the digital realm, and never will they truly appreciate the value of an agency’s time and expertise. These clients often want front row seats for a dime and do not understand the concept of ‘billable hours’ – the very foundation of agency economics. Invariably, these are clients that end up costing you more than they pay. Spotting such clients in the prospect phase is hard, so you’ll probably end up with one at some stage. But once you get them to sign on the dotted line, they become easy to identify: these clients sap your team’s morale, are rarely willing to compromise, and consistently have that big red mark against their name when you compare hours billed to hours worked on their account.
    The Agency Proposition

    Several years ago, when I was made the digital director at what is now The Tomorrow Lab, we made a conscious decision to focus on clients we want to work with. This was a logical result of what we defined as our foundational principle: attract and retain the very best talent we could find.

    For us, it all begins with great people. With great people who possess the right skills and, more importantly, the right attitude, we’re able to deliver awesome work for our clients. And to keep our talent happy and productive and engaged, we need to give them challenging work to do and wonderful clients to do it for.

    A hard-working agency team

    Bad clients that treat us like unworthy servants erode our people’s morale, diminish our creativity, and undermine our commitment to quality. Bad clients result in our best talent not feeling appreciated and valued, and thus increase our risk of staff turnover. For me, that is an unacceptable situation. We work hard to find the very best people to strengthen our team, and we don’t want to risk losing them. No single client is worth that.

    So for us, clients that treat us with respect and appreciate the value we add to their business are not simply wishful thinking – they are an absolutely crucial aspect of our agency’s success. We go out of our way to find those clients; businesses that want to excel in the digital realm and understand that an agency built on passion, commitment, and an uncompromising devotion to quality can help them scale those heights.

    [Image credits: Getty, Getty, Brand Etiquette]

  • My Digitalzone’18 talk about SEO for Google News

    Last year I was fortunate enough to deliver a talk at the Digitalzone conference in Istanbul. Among a great lineup of speakers on SEO, social media, and online advertising, the organisers asked me to speak about my specialist topic: SEO for Google News. In my talk I outlined what’s required for websites to be considered for inclusion in the curated Google News index, and how news websites can optimise their visibility in Google News and especially the associated Top Stories box in regular search results. You can view the recording of my entire talk online here: Since I delivered that talk in November 2018, there have been numerous changes to Google News – specifically to how Google handles original content and determines trust and authority. SEO for news publishers remains a fast-moving field where publishers need to pay constant attention to the rapidly evolving technical and editorial demands Google places on news sites. If you’re a publisher in need of help with your SEO, give me a shout.

  • Online Technical SEO Training Course

    I’ve been delivering my technical SEO training course in person for several years now. It’s been a very rewarding experience, with full classrooms and great feedback from the participants. Delivering these training courses in person has always felt like a competitive advantage, as the interactive element of my training is part of the appeal. I encourage my students to ask any questions they want, either during the training, in the breaks, or after the session. I always try to set the ground rule that there is no such thing as a stupid question, and I want every participant to feel empowered to ask whatever they want to make sure they get maximum value from the training.

    We’ve toyed with delivering this training in an online format for a long time. Now that most of us are stuck at home, it feels like the right time to take the plunge and see if we can do this. So my technical SEO training is going online!

    I want to preserve the interactive element of my training as much as possible, which means it’ll be delivered live. No pre-recorded videos or anything like that – it’ll be exactly like a classroom, except it’ll be done via Zoom. And instead of one long day, we’ll spread the training out over two half-day sessions. The first online training will be delivered on 27 & 28 August 2020 in two morning sessions (UK/Ireland time).

    The training content will be the same as my classroom training, though I might tweak it a bit to facilitate the online format – I expect I’ll be able to cover a bit more ground online, so I am likely to put some more content into the training. Because it’ll be delivered live, we want to keep the number of participants limited to encourage interaction and make sure everyone gets maximum value from the sessions. So if you’re interested in the training, make sure you book your spot soon!

  • Barry’s Top SEO Tools

    Updated: 31 October 2022

    One thing the SEO industry isn’t lacking is tools. For every SEO task there appears to be at least one tool that claims to be able to do it all for you. From site analysis to on-page optimisation, from outreach to content planning, you’ll never be short on tools to aid in your work. But tools can be a crutch, an inadequate replacement for real skill and experience. SEO tools are only as good as the SEO practitioner using them.

    There are hundreds of tools to choose from. Brian Dean at Backlinko has compiled a whole list of them – you’ll find dozens of tools there for every conceivable task. But here at Polemic Digital, I only use a handful of tools; a few tried and trusted platforms that, for me, deliver all the value and automation that I require. Google Search Console is such an obvious source of data that I won’t mention it here. Instead I’ll focus on my favourite third-party SEO tools that I use (almost) every day:

    1. Sitebulb

    I used to rely exclusively on Screaming Frog as my desktop SEO crawler, but recently I made the switch to Sitebulb and haven’t looked back since. Don’t get me wrong, I still love Screaming Frog and use it often, but when it comes to running a first-look crawl on a website to find (almost) every potential technical SEO issue that might affect it, nothing beats Sitebulb. In addition to extensive built-in reports – covering everything from performance checks and structured data to site architecture, HTML optimisations, and much more – the tool also allows you to extract the raw crawl data so you can dig deeper into the site and create your own sheets and reports. It’s simply the best SEO crawler out there, bar none.

    2. Screaming Frog

    Where I use Sitebulb for full site analyses, I use Screaming Frog for focused crawls – usually a specific list of URLs or an XML sitemap. The ability to connect Screaming Frog to third-party data sources like PageSpeed Insights, Google Search Console & Google Analytics, and Majestic & Ahrefs means that with Screaming Frog you can collect all the relevant data for a URL in one place, and get this data for hundreds of URLs in one go. Screaming Frog is like the Swiss army knife of SEO. Its uses are almost endless! A true must-have tool in every SEO’s arsenal.

    3. Sistrix

    For evaluating competitors, there’s one tool that stands apart from everyone else: Sistrix. With this tool you can get a very good snapshot of a site’s performance in search results, and compare that to those of its rivals. The data is excellent and will give you a reliable impression of a site’s footprint in search results, and any shifts associated with algorithm updates. You can also compare multiple websites, allowing you to see exactly where one is gaining at the expense of another. Sistrix also has a host of other features which can help with all kinds of other aspects of SEO, including site audits, keyword research, and rank tracking. I’ll be honest and admit I don’t use those, as I prefer specialised tools for those aspects of SEO.

    4. Rank Ranger

    Over the years I’ve tried many different rank trackers. They all do more or less the same, so none really stood out. Until I tried Rank Ranger. Now this is to rank trackers what a space shuttle is to a kite. It’s much more than just a rank tracker – it’s a Google data gathering platform. Rank Ranger not only tracks a site’s rankings on Google for your keywords, it gathers pretty much every conceivable bit of information about those search results pages and allows you to generate reports for them.
    Plus it has a host of other features that make it an extremely powerful SEO suite. For my work with news publishers, Rank Ranger created a special Top Stories report that tracks a domain’s visibility in the Top Stories carousels for up to 50 keywords on a daily basis. I can honestly say that Rank Ranger has become my go-to rank tracker for every client project.

    5. Little Warden

    When Dom Hodgson launched this tool in the middle of 2017 I was keen to give it a try, and I was instantly turned into a lifelong fan. Little Warden is a monitoring tool that checks a domain and homepage for a huge range of technical aspects, such as: domain name & SSL expiration, title tag & meta description changes, robots.txt changes, canonical tags, redirects, 404 errors, and many, many more. Little Warden sends you a notification every time something changes, so that you’ll never let a domain name expire or have a robots.txt disallow rule change pass unnoticed. You can configure the checks as well, and choose which checks you want to enable or disable. So far Little Warden has been a lifesaver several times already, notifying me of potential problems such as expired SSL certificates, title tag changes, wrong redirects, and meta robots tag problems. A hugely useful tool if you manage a varied client roster.

    6. NewzDash

    Because I work with several large news publishers, I need specialised tools to analyse a website’s visibility in Google News. This is where NewzDash comes in. Where Sistrix keeps track of regular search results, NewzDash monitors Google News. There are many different ways in which Google shows news results, both in the dedicated Google News vertical and as part of news boxes in regular results on desktop and mobile. NewzDash monitors all of these, and provides visibility graphs showing how different news sites perform over time. Additionally, NewzDash can also be used to see which trending news topics a website is covering, and which topics it isn’t showing up for. The latter is very useful data to give to a newsroom. An alternative to NewzDash is Trisolute News Dashboard, a similar tool for Google News and Top Stories rank tracking.

    7. SEOInfo

    I stopped using Chrome and switched to Firefox a few years ago (and you should too), which caused me a bit of a challenge in terms of browser plugins. Many SEO plugins are made for Chrome only, with no Firefox version. Then SEOInfo came around and saved my bacon. SEOInfo is pretty much the only Firefox extension you need for SEO. It gives you all the relevant info such as on-page SEO elements, meta tags, load speed, mobile usability, HTTP headers, and so much more. It also has a built-in structured data validator, and a SERP snippet simulator that shows how the page’s listing would look in Google results. All in all it’s a plugin I’ve come to rely on heavily in my day-to-day SEO work.

    8. Google SERP Checker

    Because I work with publishers all over the world, I need to be able to see search results from different countries and in various languages. This is where the Local & International Google SERP Checker comes in. With this simple web app, you can mimic Google search results from any location and language, showing you what searchers based there would see for that same query. This is incredibly valuable, allowing me to check for specific search features as well as rankings in Top Stories boxes and other search elements.

    Plenty More

    The tools listed above are the ones I use most often, but they don’t represent the full extent of the arsenal of tools at my disposal.
    There are plenty of other tools I rely on for bits and pieces, such as GDDash, Lumar, and of course Google Search Console. But there’s one tool I haven’t yet mentioned that I prize above all others: critical thinking. When you become overly reliant on tools, you lose the ability to analyse SEO issues properly, and you’ll start missing things that tools might not necessarily be able to spot. In SEO there are no more shortcuts. No tool in the world is going to turn you into an SEO expert. Tools can certainly make some aspects of SEO much easier, but in the end you’ll still have to do the hard work yourself.
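
    To illustrate the kind of checks that a monitoring tool like Little Warden automates, here is a minimal sketch of a home-grown monitor: it fetches a site’s homepage and robots.txt, extracts the title tag, and compares the results with the previous run so any change can be flagged. This is not Little Warden’s code or API; the state file, the example domain, and the use of the requests library are all assumptions made purely for illustration.

```python
import json
import re
import requests  # assumed to be installed: pip install requests

STATE_FILE = "monitor_state.json"  # hypothetical local file holding the last snapshot

def snapshot(domain):
    """Capture the homepage title tag, status code, and robots.txt contents for a domain."""
    home = requests.get(f"https://{domain}/", timeout=10)
    robots = requests.get(f"https://{domain}/robots.txt", timeout=10)
    match = re.search(r"<title[^>]*>(.*?)</title>", home.text, re.I | re.S)
    return {
        "title": match.group(1).strip() if match else None,
        "robots_txt": robots.text if robots.ok else None,
        "status": home.status_code,
    }

def check(domain):
    """Compare the current snapshot against the last saved one and report any changes."""
    current = snapshot(domain)
    try:
        with open(STATE_FILE) as f:
            previous = json.load(f)
    except FileNotFoundError:
        previous = {}  # first run, nothing to compare against yet
    for key, value in current.items():
        if key in previous and previous[key] != value:
            print(f"CHANGE on {domain}: {key} was {previous[key]!r}, now {value!r}")
    with open(STATE_FILE, "w") as f:
        json.dump(current, f)

check("example.com")  # hypothetical domain; run on a schedule (e.g. cron) to catch changes
```

    A dedicated service obviously covers far more checks (SSL expiry, canonicals, redirects, and so on), but the principle is the same: take a snapshot, compare it to the last one, and shout when something changes.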

  • Google News vs Donald Trump: Bias in Google’s Algorithms?

    This morning US president Donald Trump sent out a few tweets about Google News. Since optimising news publishers for Google News is one of my key specialities as a provider of SEO services, this piqued my interest more than a little. In his tweets, Trump accuses Google News of having a liberal anti-Trump bias: “96% of results on “Trump News” are from National Left-Wing Media, very dangerous. Google & others are suppressing voices of Conservatives and hiding information and news that is good. They are controlling what we can & cannot see. This is a very serious situation-will be addressed!”

    The source of Trump’s information regarding Google News’s perceived bias is the right-wing blog PJ Media, who published a story about the sites that Google News lists when searching for ‘Trump’ in Google and selecting the ‘News’ tab in search results. According to PJ Media, “Not a single right-leaning site appeared on the first page of search results.” This is the chart that PJ Media used to determine whether a listed news site is right-wing or left-wing:

    Putting aside the questionable accuracy of this chart and the tiny sample size of PJ Media’s research, there is a valid underlying question: can algorithms be truly neutral?

    Google News vs Regular Google Search

    First of all, we need to be clear about what we mean when we say ‘Google News’. Google’s search ecosystem is vast, complex, and intricately linked. Originally, Google News was a separate search vertical that allowed people to search for news stories. It was soft-launched in beta in 2002 and officially launched in 2006. Then, in 2007, came Universal Search. Google started combining results from different verticals – images, videos, news, shopping – with its regular web search results. This was the start of Google’s SERPs as we know them today: rich results pages where pages from the web are combined with relevant news stories, images, and knowledge graph information. This is still the norm today. Take, for example, Google’s regular web search result for ‘trump’:

    In just this one search results page we have a knowledge panel on the right with information on Trump, related movies & TV shows, Trump’s official social media profiles, and a ‘People also search for’ box. In the main results area we have a Top Stories carousel followed by recent tweets from Donald Trump, relevant videos, a ‘People also ask’ box of related searches, a box with other US presidents, another box with political leaders, and a box with people relevant to Trump’s ex-wife Ivana. And amidst all this there are nine ‘regular’ web search results. While Trump’s official website is listed, it’s not the first regular result, and the page is dominated by results from publishers: The Guardian, BBC, The Independent, Washington Post, The Atlantic, Vanity Fair, and NY Magazine.

    There’s a reason publishers tend to dominate such search results – I’ve given conference talks about that topic – but believe it or not, that’s not where news websites get the majority of their Google traffic from. Nor is the news.google.com vertical a particularly large source of traffic: it only accounts for approximately 3% of traffic to news sites. So where does publishers’ search traffic come from?
    Well, news publishers depend almost entirely on the Top Stories carousel for their search traffic:

    Especially on mobile devices (which is where the majority of Google searches happen), the Top Stories carousel is a very dominant feature of the results page:

    According to research from Searchmetrics, this Top Stories box appears in approximately 11.5% of all Google searches, which amounts to billions of search results pages every single day. This is why news publishers work so very hard to appear in that Top Stories carousel, even when it means implementing technologies like AMP, which run contrary to most news organisations’ core principles but are a requirement for appearing in Top Stories on mobile. Of course, search is not the only source of traffic for news publishers, but it is by far the largest:

    News publishers don’t really have much of a choice: they either play by Google’s rules to try and claim visibility in Google News, or try and survive on the scraps that fall from Google’s table. For me the interesting question is not ‘is Google News biased?’ but ‘how does Google select Top Stories?’ The answer to that question has three main elements: technology, relevancy, and authority.

    The Technology of Google News & Top Stories

    The technical aspects of ranking in the Top Stories carousel are fairly straightforward, but by no means simple. First of all, the news site has to be included in the Google News index. This is not optional – according to NewsDashboard, over 99% of articles shown in Top Stories are from websites that are included in the Google News index. Because this news index is manually maintained, there is an immediate opportunity for accusations of bias. The people responsible for curating the Google News index make decisions about which websites are okay and which aren’t, and this cannot be a ‘neutral’ and ‘objective’ process, because people aren’t neutral and objective. Every news site is accepted or rejected on the basis of a human decision. As all human decisions are subject to bias – especially unconscious bias – this makes the initial approval process already a subjective one.

    Secondly, the news site needs to have certain technical elements in place to allow Google News to quickly crawl and index new articles. This includes structured data markup for your articles, and a means of letting Google know you have new articles (usually through a news-specific XML sitemap). Both of these technologies are heavily influenced by Google: schema.org is a joint project from Google, Bing and Yahoo, and the sitemaps protocol is entirely dependent on search engines like Google for its existence.

    Thirdly, you need to have valid AMP versions of your articles. Some may see this as an optional aspect, but really, without AMP a news site will not appear in Top Stories on mobile search results. This presents such a catastrophic loss of potential search traffic that it’s economically unfeasible for news websites to forego AMP. While AMP is presented as an open source project, in reality the vast majority of its code is written by Google engineers. At last count, over 90% of the AMP code comes from Googlers. So let’s be honest, AMP is a Google project. This gives Google full technical control over Google News and Top Stories – in Google’s own crawling, indexing, and ranking systems, as well as in the technologies that news publishers need to adopt to be considered for Google News.
    Publishers don’t have all that much freedom in designing their tech stack if they want to have any hope of getting traffic from Google.

    Ranking in Top Stories

    The other aspects of ranking in Google News and Top Stories are about the news site’s editorial choices. While historically the Top Stories algorithm has been quite simplistic and easy to manipulate, that’s less the case nowadays. Since the powerful backlash against holocaust denial stories appearing in Google News, the search engine has started putting more resources into its News division, with a newly launched Google News vertical as the result. The algorithms that decide which stories show up in any given Top Stories carousel take a number of aspects into consideration: Is the article relevant for this query? Is it a recently published or updated article? Is it original content? Is the publisher known to write about this topic? Is the publisher trustworthy and reliable?

    In Google News there is also a certain amount of personalisation, where Google’s users will see more stories from publishers that they prefer or that are seen as geographically relevant (for example, because it’s a newspaper local to the story’s focus). And of course, a lot of the rankings of any given news article depend on how well the article has been optimised for search. A classic example is Angelina Jolie’s column for the New York Times about her double mastectomy – if you search for ‘angelina jolie mastectomy’ her column doesn’t rank at all, and at the time it didn’t appear in any Top Stories carousel. What you see are loads of other articles written about her mastectomy, but the actual column that kicked off the story is nowhere to be found. One look at the article in question should tell you why: it’s entirely unoptimised for the most relevant searches that people might type into Google.

    Some journalism purists might argue that tweaking an article’s headline and content for maximum visibility in Google News is a pollution of their craft. Yet journalists seem to have no qualms about optimising headlines for maximum visibility at newsstands. News publishers have always tried to grab people’s attention with headlines and introduction text, and doing this for Google News is simply an extension of that practice. Yet even with the best optimised content, news publishers are entirely dependent on Google’s interpretations of their writing. It’s Google’s algorithms that decide if and where an article appears in the Top Stories carousel.

    Algorithms Are Never Neutral

    According to Google, the new version of Google News uses artificial intelligence: “The reimagined Google News uses a new set of AI techniques to take a constant flow of information as it hits the web, analyze it in real time and organize it into storylines.” This seems like an attempt at claiming neutrality by virtue of machines making the decisions, not humans. But this doesn’t stand up to scrutiny. All algorithmic evaluations are the result of human decisions. Algorithms are coded by people, and that means they will carry some measure of those people’s own unconscious biases and perceptions. No matter how hard Google tries to make its algorithms ‘neutral’, it’s impossible to achieve real neutrality in any algorithm.
    When Google’s algorithm decides that the story from Site A should appear first in the Top Stories carousel, and a similar story from Site B should be way down at the end of the carousel (or not in there at all), that is the result of countless human decisions – some large, some small – about what constitutes relevancy and trustworthiness. Even with a diverse base of employees from all different backgrounds and walks of life, creating neutral algorithms is immensely challenging. Senior engineers’ decisions will almost always outweigh junior staff’s decisions, and some people’s biases will be represented in the editorial decisions that are made about how an algorithm ranks content.

    And here Google can be very rightfully accused: it has an incredibly homogeneous employee base. Ironically, while Google reports on its employees’ ethnicity and gender, it doesn’t report on political leanings – which is what sparked the furore about its lack of diversity in the first place. So we have no way of really knowing if Google’s engineers come from varied political backgrounds. This leaves Google wide open to criticisms of bias, and it’ll be very hard to dismiss those concerns.

    Is Google News Biased?

    To return to the question of bias in Google News, does Donald Trump have a point? The PJ Media article that sparked the controversy is deeply flawed and entirely unrepresentative, but there are other sources that point towards a left-leaning bias in Google News:

    Yet simply looking at Google News search results and evaluating their diversity of opinion is a dangerous approach, because it fails to look at the underlying dependencies that go into creating that result in the first place: the technological demands placed on news publishers, the skill of individual journalists to optimise their articles for Google News, and the ability of news organisations to break stories and set the news agenda. And we can’t leave out the fact that Google openly admits to making editorial decisions in Google News. Yes, actual people choosing stories to show up for trending topics. From its own support documentation on curation:

    The choice of language is very interesting: by using phrases like ‘empirical signals’ and ‘algorithmically populated’, Google intends to create the perception that these human curators have no real editorial influence over what is shown in Google News. Yet even if we accept the notion that Google’s curators are able to make neutral decisions (which we shouldn’t), we know that algorithms are not neutral themselves, and – at the risk of treading on philosophical ground – there’s no such thing as ‘empirical signals’ when it comes to news.

    Despite its efforts with the Google News Initiative, Google has done little to alleviate legitimate fears of bias in its ranking algorithms. In fact, due to its near full control over the entire process, Google leaves itself very susceptible to accusations of bias in Google News. With Google’s astonishing dominance in search – over 86% worldwide market share – this does beg the question: Can we trust Google? Should we?
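
    As a concrete illustration of the news-specific XML sitemap mentioned above, here is a minimal sketch that emits a Google News sitemap restricted to articles published in the last 48 hours. The article records and publication name are hypothetical placeholders, and the element list shown is only the core of the format; consult Google’s news sitemap documentation for the authoritative specification.

```python
from datetime import datetime, timedelta, timezone

NOW = datetime.now(timezone.utc)

# Hypothetical article records; in practice these would come from the publisher's CMS.
ARTICLES = [
    {"url": "https://news.example.com/story-one/", "title": "Story one",
     "published": NOW - timedelta(hours=3)},
    {"url": "https://news.example.com/story-two/", "title": "Story two",
     "published": NOW - timedelta(hours=72)},  # older than 48 hours, so excluded
]

def build_news_sitemap(articles, publication_name="Example News", language="en"):
    """Build a Google News sitemap containing only articles from the last 48 hours."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=48)
    entries = []
    for article in articles:
        if article["published"] < cutoff:
            continue  # Google News sitemaps should only list recent articles
        entries.append(f"""  <url>
    <loc>{article['url']}</loc>
    <news:news>
      <news:publication>
        <news:name>{publication_name}</news:name>
        <news:language>{language}</news:language>
      </news:publication>
      <news:publication_date>{article['published'].isoformat()}</news:publication_date>
      <news:title>{article['title']}</news:title>
    </news:news>
  </url>""")
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"\n'
            '        xmlns:news="http://www.google.com/schemas/sitemap-news/0.9">\n'
            + "\n".join(entries)
            + "\n</urlset>")

print(build_news_sitemap(ARTICLES))
```

    The point of the exercise is the dependency it illustrates: even this small piece of plumbing only exists because Google requires it, which is exactly the kind of technical control over publishers discussed in the article.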

  • View Source: Why it Still Matters and How to Quickly Compare it to a Rendered DOM

    SEOs love to jump on bandwagons. Since the dawn of the industry, SEO practitioners have found hills to die on – from doorway pages to keyword density to PageRank sculpting to Google Plus. One of the latest hypes has been the ‘rendered DOM’; basically, the fully rendered version of a webpage with all client-side code executed. When Google published details about their web rendering service last year, some SEOs were quick to proclaim that only fully rendered pages mattered. In fact, some high profile SEOs went as far as saying that “view source is dead” and that the rendered DOM is the only thing an SEO needs to look at.

    These people would be wrong, of course. Such proclamations stem from a fundamental ignorance about how search engines work. Yes, the rendered DOM is what Google will eventually use to index a webpage’s content. But the indexer is only part of the search engine. There are other aspects of a search engine that are just as important, and that don’t necessarily look at a webpage’s rendered DOM. One such element is the crawler. This is the first point of contact between a webpage and a search engine. And, guess what, the crawler doesn’t render pages. I’ve explained the difference between crawling and indexing before, so make sure to read that.

    Due to the popularity of JavaScript and SEO at the moment, there are plenty of smart folks conducting tests to see exactly how putting content into JavaScript affects crawling, indexing, and ranking. So far we’ve learned that JavaScript can hinder crawling, and that indexing of JS-enabled content is often delayed. So we know the crawler only sees a page’s raw HTML. And we know that Google has a multilayered indexing approach that first uses a webpage’s raw HTML before it gets around to rendering the page and extracting that version’s content. In a nutshell, a webpage’s raw source code still matters. In fact, it matters a lot.

    I’ve found it useful to compare a webpage’s raw HTML source code to the fully rendered version. Such a comparison enables me to evaluate the differences and look at any potential issues that might occur with crawling and indexing. For example, there could be some links to deeper pages that are only visible once the page is completely rendered. These links would not be seen by the crawler, so we can expect a delay to the crawling and indexing of those deeper pages. Or we could find that a piece of JavaScript manipulates the DOM and makes changes to the page’s content. For example, I’ve seen comment plugins insert new heading tags on to a page, causing all kinds of on-page issues.

    So let me show you how I quickly compare a webpage’s raw HTML with the fully rendered version.

    HTML Source

    Getting a webpage’s HTML source code is pretty easy: use the ‘view source’ feature in your browser (Ctrl+U in Chrome) to look at a page’s source code – or right-click and select ‘View Source’ – then copy & paste the entire code into a new text file.

    Rendered Code

    Extracting the fully rendered version of a webpage’s code is a bit more work. In Chrome, you can open the browser’s DevTools with the Ctrl+Shift+I shortcut, or right-click and select ‘Inspect Element’. In this view, make sure you’re on the Elements tab. There, right-click on the opening <html> tag of the code, and select Copy > Copy outerHTML. You can then paste this into a new text file as well.
    With Chrome DevTools you get the computed DOM as your version of Chrome has rendered it, which may include code manipulations from your plugins, and your browser will likely be a different version of Chrome than the one Google uses to render the page. While Google now has its evergreen rendering engine that uses the latest version of Chrome, it’s unlikely Google will process all the client-side code the same way as your browser does. There are limits on both time and CPU cycles that Google’s rendering of a page can run into, so your own browser’s version of the rendered code is likely to be different from Google’s.

    To analyse potential issues with rendering of the code in Google search, you will need the code from the computed DOM as Google’s indexer sees it. For this, you can use Google’s Rich Results Testing tool. This tool renders webpages the same way as Google’s indexer, and has a ‘View Source Code’ button that allows you to see – and copy – the fully rendered HTML:

    Compare Raw HTML to Rendered HTML

    To compare the two versions of a webpage’s code, I use Diff Checker. There are other tools available, so use whichever you prefer. I like Diff Checker because it’s free and it visually highlights the differences. Just copy the two versions into the two Diff Checker fields and click the ‘Find Difference’ button. The output will look like this:

    In many cases, you’ll get loads of meaningless differences such as removed spaces and closing slashes. To clean things up, you can do a find & replace on the text file where you saved the raw HTML, for example to replace all instances of ‘/>’ with just ‘>’. Then, when you run the comparison again, you’ll get much cleaner output:

    Now you can easily spot any meaningful differences between the two versions, and evaluate whether these differences could cause problems for crawling and indexing. This will highlight where JavaScript or other client-side code has manipulated the page content, and allows you to judge whether those changes will meaningfully impact on the page’s SEO.

    DirtyMarkup Formatter

    When you do your first Diff Checker comparison, you’ll quickly find that it’s not always very useful. When a page is rendered by Google, a lot of unnecessary HTML is stripped (such as closing slashes in HTML tags) and a general cleanup of the code happens. Sometimes a webpage’s source code will be minified, which removes all spaces and tabs to save bytes. This leads to big walls of text that can be very hard, if not impossible, to analyse:

    For this reason, I always run both the raw HTML and the fully rendered code through the same code cleanup tool. I like to use DirtyMarkup Formatter for this. By running both the HTML source and the rendered DOM through the same cleanup tool, you end up with code on both sides of the comparison that has identical formatting. This then helps with identifying problems when you use Diff Checker to compare the two versions. Comparing two neatly formatted pieces of code is much easier and allows you to quickly focus on areas of the code that are genuinely different – which indicates that either the browser or a piece of client-side code has manipulated the page in some way.

    Built-In Comparison

    If all of the above sounds like a lot of manual effort, you’re right. That’s why SEO tool vendors like DeepCrawl, Screaming Frog, and Sitebulb now have built-in comparison features for the HTML and rendered versions of each crawled page on your site. I still prefer to look at manual comparisons of key pages for every site that I audit.
It’s not that I don’t trust the tools, but there’s a risk in only looking at websites through the lens of SEO tools. Nothing beats proper manual analysis of a webpage when it comes to finding SEO issues and making informed, actionable recommendations.
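
    For those who want to script part of the manual workflow described above, here is a minimal sketch: it normalises two locally saved files (the raw ‘view source’ HTML and the rendered DOM copied from DevTools or the Rich Results test) and prints a unified diff, similar to what Diff Checker shows. The file names are hypothetical, and the clean-up rules are just examples of the kind of normalisation discussed in the article.

```python
import difflib
import re

def normalise(html):
    """Apply simple clean-up so trivial differences don't drown out real ones."""
    html = html.replace("/>", ">")           # drop self-closing slashes
    html = re.sub(r"[ \t]+", " ", html)      # collapse runs of spaces and tabs
    html = re.sub(r">\s*<", ">\n<", html)    # one tag per line for readable diffs
    return [line.strip() for line in html.splitlines() if line.strip()]

def compare(raw_path, rendered_path):
    """Print a unified diff between the raw HTML source and the rendered DOM."""
    with open(raw_path, encoding="utf-8") as f:
        raw = normalise(f.read())
    with open(rendered_path, encoding="utf-8") as f:
        rendered = normalise(f.read())
    for line in difflib.unified_diff(raw, rendered,
                                     fromfile="raw html", tofile="rendered dom",
                                     lineterm=""):
        print(line)

# Hypothetical file names; save the two versions manually as described above.
compare("raw_source.html", "rendered_dom.html")
```

    The output only shows lines that genuinely differ between the two versions, which is exactly where client-side code has added, removed, or changed content on the page.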

  • Technical SEO in the Real World

    In September 2018 I gave a talk at the awesome Learn Inbound conference in Dublin, where I was privileged to be part of a speaker lineup that included Britney Muller, Wil Reynolds, Ian Lurie, Aleyda Solis, Paddy Moogan, Laura Crimmons, Jon Myers, and many more excellent speakers. My talk was about some of the more interesting technical SEO conundrums I’ve encountered over the years. The folks at Learn Inbound recorded the talk and have made it available for viewing: I gave an updated version of this talk two weeks later at BrightonSEO, so if you missed either one of those you can now watch it back for yourself.

  • Build Conference – a must-attend for web designers

    I had the honour and privilege to be present at yesterday’s Build conference, an annual (web) design conference hosted in Belfast’s Waterfront venue. Organised by local Northern Irish talent Andy McMillan, Build is one of those conferences that provides nourishment for the design geek’s soul: cool schwag, great talks, and more Mac logos than I was comfortable with. It even boasted a caffeine monitor that kept track of the amount of caffeinated beverages consumed by conference delegates.

    From Click To Tap – Keegan Jones & Tim Van Damme

    The first talk was by Keegan Jones and Tim Van Damme, who both look and talk like stereotypical web geeks. They spoke about design for mobile, specifically mobile apps, and gave some great tips on how to make the best use of limited screen real estate and what to keep in mind when you embark on your mobile web/app journey.

    More Perfect Typography – Tim Brown

    This presentation by soft-spoken – but very intense – Tim Brown appeared to be one of those typical design-obsessive things, but somewhere halfway through the talk it suddenly clicked for me. Tim Brown makes the case that web design should start with a choice of type, as this not only colours the content (try reading a piece of text in Times New Roman, and then in Comic Sans, and see how differently you interpret it) but can also help you scale your entire design. By using your chosen font’s optimal size as a starting point and then scaling up with, for example, the Golden Ratio (1:1.618), you can create a design that somehow fits well and feels right.

    The Shape Of Design – Frank Chimero

    Where the first talk was given by typical web geeks, Frank Chimero is a typical design geek – tweed jacket, hip tie, and Apple-addicted. His talk was a somewhat rambling affair about the role of a designer and what the perceived and real added value of design is. It all boiled down to that wearying old mantra that we have to be authentic and real and somehow try to ‘tell stories’, whatever that means. Don’t get me wrong, it was an entertaining talk, just not particularly innovative or insightful.

    Adding By Leaving Out – Liz Danzico

    I didn’t take a lot of notes during this talk, which is very appropriate, as Liz Danzico talked about the power of omission. Liz spoke about how silence can have a lot of meaning and how white space is an active element of a design instead of a passive background. While interesting and thought-provoking, the talk lacked concrete advice – which was probably intentional, as Liz likely meant to inspire rather than lecture.

    Conquer The Blank Canvas – Meagan Fisher

    Meagan Fisher, a self-proclaimed owl-obsessive, laid out her four-step design process in this talk. She seemed a bit nervous on stage (and who wouldn’t be, being stared at by 300+ geeks and nerds), but she really didn’t have any reason to be, as her talk was probably the most fascinating and insightful one – for me at least. Not only did she give us a great insight into how she manages her design process and deals with each facet, her slides were also the most visually astounding. This talk delivered a double-whammy, as Meagan’s design process gave the audience very useful tips and insights and her slides served as a rich source of design inspiration as well.

    Due to other obligations I missed the last talk of the day, which was Dan Cederholm talking about handcrafted CSS, but as Dan’s reputation precedes him I have no doubt that it was a superb talk.
While these talks form the core of the Build conference, they only take up one day of what is an elaborate and highly entertaining week full of activities including workshops, a pub quiz, lectures, and even a film showing at Queen’s Film Theatre. All in all I can say that Build is a conference every self-respecting (web) designer should try to attend. Some speakers have already been confirmed for the 2011 edition, and if you are at all involved in web design I highly recommend you try to be there.
