This is a great talk from one of my favourite Googlers – he does a great job of explaining search and Google whenever I have seen him speak. If you can’t watch the video (you will need sound), I have the transcript below.
Video Transcript
Hello, everybody, how are you doing? Enjoyed lunch? Had a good time? Whoo. Yeah. It’s always fun to be in this spot. Alright. Cool. I’m really happy to see that some people made it into this talk, because I know that SEO is not the most popular topic with developers. But I think it’s worthwhile talking about it. So today, I’d like to take some time to talk about technical SEO and demystify it a little bit, because, you know, sometimes it feels like a little bit of a black box; I keep hearing that phrase a few times. And I’m here to open this up. I’ll start by basically explaining what it actually is, why we as developers should care about it, and how we can help our websites be successful in search. We’re also going to talk about how that works: how the process of indexing and rendering and, you know, putting something into search works.
This process is more or less the same for all search engines. But I can only represent our search engine, which is Google Search. Still, pretty much most of the things are more or less the same in other crawlers as well, because the problem is pretty much the same. Anyway, we’re also going to talk about a bunch of things that we can do specifically for Angular applications, a few pitfalls that can happen and how to avoid those, and common challenges and things that I see going wrong sometimes. So we will figure out how to avoid them and how to do our best with Angular in terms of search. And then we’ll talk about how you can actually test your site, because there’s a bunch of testing tools that I think are underappreciated, and I would definitely like to point some of these out to you. And last but not least, to wrap it all up:
I’ll leave you with some resources that you can use to share with your co-workers, or to read up on more details, and basically figure out what you need from that.
But let’s start with figuring out what this SEO thing is. If we talk about developers, then we acknowledge that we do a wide variety of things, right?
Who here is an Angular developer? Who here does back end?
It’s not the same hands going up. And that’s fantastic. So, like, there’s plenty of different things that you can do in development. And actually, SEO is quite broad as well, as it turns out. I can’t give a comprehensive definition of what SEO is, because everyone does it slightly differently.
But I’ll tell you what I think an SEO should be doing, and does if they’re doing things right, as far as I am aware. They don’t have to do everything like that.
But I would say it’s a good thing to think of SEO as that. And by that, I mean, SEOs usually help you with your content. What does that mean?
It doesn’t mean that they write the content necessarily themselves, but they figure out together with marketing and sales, what are the terms that we should be using?
What are the terms people are looking for, what is it that we are offering, and how can we describe it in the most human way? Because oftentimes companies forget the words people use to talk about the thing that they are producing or that they’re selling.
And SEOs can help identify these things and figure out what good content would look like, what kind of things you can produce.
What kind of websites could you produce to help people identify what they need, and then, probably, hopefully, convert on your website by buying something or using your service in a certain way.
That also ties into strategy. When I say strategy, what I mean is they would talk to other business stakeholders and figure out: what is it that we’re doing, and how can we do it better for the people who are looking for what we are offering, so that we can better bring together, online, the people who look for our stuff and our stuff.
Now, these are things that we are not really concerned with that much, right. We have our hands full with other things, but luckily, I’m here to talk about the technical side of SEO.
So there are technical requirements. There is a technical process. Basically, what you’re doing is dealing with another technical system consuming your web content.
So it is something that we need to consider when designing our systems and designing our websites and web apps, and we need to make sure that it works.
And we need to make sure that it keeps working. That’s something that a good SEO can also help you with, especially if it’s a technical SEO consultant or an SEO person on your team who’s more technical. They can help you figure out: is what we’re going to deploy going to work? Is what we already have online still working? And stuff like that.
And they can also probably point you at resources to help you implement the next feature or fix the next bug that is related to SEO.
So there is this overlap here. And I think we as developers can benefit from that: not having to do all of this ourselves and learn everything about search engines and SEO, but having this ally on our team who hopefully helps us understand what we need to do, and also helps communicate to the other people on the team what they need to do before that.
So I’m going to focus on that. But before we go into the technology aspect of SEO, I’d like to quickly say one thing.
Without good content, and a good strategy for how you put that content in the hands of users, your SEO efforts on the technical side are basically the chocolate ice cream emoji. Because it’s polishing a turd.
That’s what it comes down to. If your content isn’t good, no technical implementation will fix that. So basically: if your website is really fast but displays an error, that’s not gonna be good.
If your website is fast and uses the most modern technology with a fantastic user experience, that’s not good if it doesn’t say what it is about and doesn’t help me do what I need to accomplish.
And to give you a more concrete example of that: I grew up in Germany, I’m German. Sorry. I have zero sense of humour. But I do enjoy breakfast; breakfast is like the most important meal of the day.
And I like it with my toast, but my toaster broke down.
So I’m looking for a new toaster. And again, being very German, I look at it from an analytical point of view: I research stuff online. And now I find this website.
And if I look at it, and I look at the most prominent part of it, then I’m like: okay, so “smart, simple and beautiful”. Whatever that means; that can be anything, right?
My dog can be smart, simple and beautiful. Doesn’t really help me. And then it says it will disrupt your breakfast. I don’t want my breakfast disrupted.
I want it peaceful and quiet. It’s the, you know, start of the day, and it’s the most important meal. I don’t want that disrupted. And then it’s like “thermal chemical food processing”.
What does that even mean? Do you cook my bread? Like what? What is this?
So I have no idea what this website is about. But I might look at the other links that they provide me with, and here we have, like, “Our philosophy”. I don’t want to study philosophy, I want to eat bread.
“Hot bread”, whatever that is. I still don’t know what this hot bread thing really is, so I’m not going to click on that either.
And then it’s like: “Join the movement”.
I don’t know about you, but I’m pretty happy with my religion.
I don’t want to join a cult. So, what is this website? And you’re laughing at this, but I see so many companies putting out websites that are essentially this, when what they should be putting out is something that looks like this. I’m looking for a toaster, and this is the fastest toaster. That’s very clear.
It says “fastest toaster”, “never burn your toast again”, “get your toast faster”, “try toast in the toaster face”.
But again, I’m German, so I’m like: I don’t want to make this commitment without having done my research.
And actually, I don’t know about you, but I don’t know how to choose a good toaster. Like, what’s the right toaster for me?
I don’t know. So luckily, these people have a strategy, and they said: look, we’re gonna help you choose the toaster.
And we’re going to show you other toasters as well. So I don’t know if this is the right toaster, but here I can find out if it is the right toaster I should be buying.
And if it’s not, then I find the other alternative toasters. If it happens to be the right toaster, for me, that’s a very clear call to action, buy a toaster.
That’s perfect. That’s fantastic. That’s exactly what I needed. This is a good website. This is good content. And it has the right strategy for me as someone looking for toasters. The first website? Not so much.
So please build more websites like that, not the others.
And I know that you’re not responsible for the content.
But I would love it if you spoke up, because you can be pretty sure that if there is an SEO consultant on the team and marketing came up with the first version of this website, they’ll be like: maybe we should rethink our content. But sometimes they’re not being listened to.
And you can be another voice on the team that says: if I were buying a toaster, I wouldn’t go to that first website. I would go to this one. So help your SEOs out as well.
That would be fantastic. But now let’s actually go into the technology side of things.
And I would like to start with: how does your website actually end up in Google? How does Google Search work from the perspective of me as a web developer? So Google Search, or the Googlebot process, starts with a list of URLs that we happen to know about. Maybe someone linked to your website, or you submitted your website, or we figured out some other way that one of the pages of your website exists.
And we have it in the long list of URLs that we’re going to look at.
And from this list of URLs, we take individual URLs out and put them into what we call the crawler.
Now, the crawler is the first step of the process.
And all it does, really, is check if it can actually crawl this page by looking at the robots.txt.
So we are nice and friendly, you can tell us not to crawl your website, or specific pages of your website.
If we can crawl, what we’re going to do is make an HTTP request, and then we get the content. If it’s a website, that’s usually HTML. We process that HTML content and look for links in it. If we find new URLs, those go straight back to the URL queue, and we can crawl them in parallel or later, depending on what the resource situation looks like.
And then once we have done that, we process the page: using the HTML markup that you provided, we try to figure out what this page is about. Is this about toasters? Is this about boats? Is this about kittens? Is it about hummus? Right? And then once we know what this is, we file it in the index.
It’s basically a huge database. So in this database, we have all sorts of content. And we know what each of these URLs is about basically.
However, if you’re building Angular web apps, or any JavaScript client-side rendered web apps, your content, or what we got from crawling, looks more or less like this.
What is this?
What is this website about?
I don’t know. So is that a problem?
Well, actually, this might be a problem for some crawlers. There are crawlers on the internet that do not run JavaScript, so that’s all they see. Specifically the social networks: if you want to share a link on social networks and use these Open Graph tags, if those tags are not in that HTML, they’re not going to see them.
Luckily for you, Googlebot actually knows how to run JavaScript.
And we have been doing that for a couple of years. But the web, as it turns out, is quite big.
So we can’t really do that all in one go. Like, we can’t just casually run all the JavaScript on the internet; that doesn’t really work, because the web turns out to be quite a big number with a lot of zeros.
It’s 130 trillion, or it was in 2016.
That’s the latest I’ve got as an approved number.
So I can’t really say how much we’ve got right now.
But it’s more, let’s just say it grows. So instead of just running it all in one go, what we do instead is we have another queue.
So we put your content into another queue and wait until we have the resources available, because it turns out the cloud is just someone else’s computer.
So we have to wait for these computers to become available, but then we render your page.
That means we’re basically opening it with a headless Chromium. Headless Chromium downloads all the resources that you have mentioned in your HTML, executes the JavaScript, and then produces HTML that we pass back.
And this pass then does the same thing: it looks for links in the HTML that we have now, puts them into the crawl queue, and puts the content into the index.
So that works.
Now a lot of people know that.
It used to be an old version of Chrome running, and lots of SEOs are probably going to point that out. If they do, point them to this wonderful resource.
At I/O this year, we announced that Googlebot is now running an evergreen Chromium.
So whenever a new stable version of Chrome is released, within a couple of weeks, Googlebot updates to it as well.
So we can now run ES6, we can now run the Web Components v1 APIs and all that lovely stuff.
And plenty of web APIs are now available. If you want to learn more about that, I highly recommend reading the blog post, of course.
So now Google Search can actually index your JavaScript content.
That’s great. There’s one more step that happens.
I’m not going to really talk about it that much.
But I want to mention it.
So now we have your stuff in the index.
We know your website is about toasters, and is the fastest toaster and whatnot and has like all this lovely content.
But there’s plenty of other pages that are also about toasters.
So which ones are we going to show first to the user? Which ones are the first search results that the user is going to see?
That’s determined by ranking, we look at hundreds of factors.
And that doesn’t necessarily say something about your quality because if you are talking about the fastest toaster, someone looking for the cheapest toaster might not be served well with your website.
So we might want to rank someone else higher for this specific query.
But for “the fastest toaster”, your website might be better. It’s a very complicated topic, very much out of your control anyway, so I highly recommend just ignoring it for the moment. I’m not really going to talk about ranking much today.
So we are focusing on the three things that you can influence: crawling, rendering and indexing. Getting your stuff into search is where you can help.
So how can you help Googlebot?
Well, the first thing is, as we saw, there are two passes where we are looking for links.
So the easiest way to help us find other pages of your website is to link between them in the different pages.
You don’t have to do that specifically for Googlebot. Don’t add links just for Googlebot; add links for the users to navigate through.
So basically, the flow that you want the user to take is probably more or less the flow Googlebot is going to take as well.
Also, when I’m saying linking, I actually mean linking. Googlebot doesn’t click on stuff, so don’t use, like, a button. If it goes somewhere else, it’s a link. So use links: an a tag with an href.
Okay. Don’t use a span with an onClick handler.
Don’t use a button.
Don’t use, I don’t know, a select box. People are sometimes interestingly creative when it comes to that kind of stuff.
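To make that concrete, here’s a minimal Angular sketch (the component and route are made up). The first element is a real link that a crawler can follow; the other two work fine for a clicking user but give Googlebot nothing to discover:

```typescript
import { Component } from '@angular/core';
import { Router } from '@angular/router';

@Component({
  selector: 'app-nav',
  template: `
    <!-- Crawlable: a real <a> tag; routerLink renders an actual href -->
    <a routerLink="/products/toaster">The fastest toaster</a>

    <!-- Not crawlable: Googlebot doesn't click, and there is no href to find -->
    <span (click)="go()">The fastest toaster</span>
    <button (click)="go()">The fastest toaster</button>
  `,
})
export class NavComponent {
  constructor(private router: Router) {}

  go(): void {
    this.router.navigate(['/products/toaster']);
  }
}
```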
Also, a quick note on URLs. Angular doesn’t do this anymore by default, and other frameworks have caught up as well. But if you still have hash-based routing, so like hash slash products or hash slash product slash toaster, we don’t actually crawl these properly.
The problem there is that these hashes are actually fragments, so parts of the same content, which means we can ignore them. Because if we get the URL without the hash, we get the entire content, and we’ll be fine. But if you’re using the hashes to load different content, then that breaks this very fundamental standard. That used to be the way to do it; we had to do that in, like, 2014, I believe.
But now we have the History API, and it’s the default routing mode.
So if you still have hash-based URLs, do not use them, or give us another way of seeing your content.
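For what it’s worth, in Angular that’s just the default router setup. A minimal sketch (the route and component are illustrative):

```typescript
import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';

import { ProductComponent } from './product.component'; // illustrative

const routes: Routes = [
  { path: 'product/:name', component: ProductComponent },
];

@NgModule({
  // The History API based PathLocationStrategy is the default, giving
  // crawlable URLs like /product/toaster. useHash: true would produce
  // /#/product/toaster instead, so make sure it stays off.
  imports: [RouterModule.forRoot(routes, { useHash: false })],
  exports: [RouterModule],
})
export class AppRoutingModule {}
```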
Also, maybe you have pages that you know are very thin on content, because they’re landing pages for various specific campaigns or something, or they’re user-generated content and you don’t know if they’re good quality.
If you let people post restaurant reviews or something, then you might want to make sure that the reviews are good quality, or that the postings people are making are good quality, before putting them in. Because these pages are competing: basically, if you have a million pages and we are picking them one by one, and then there’s a batch of 250 bad posts, we’re going to see them and we’re going to be like, okay, this is bad quality.
But you know, we have to go to another website now and crawl that.
So we might not see the next batch of good quality content.
So you want to make sure that you’re giving us hints not to spend time on low-quality pages that might not even make it into the index, because we’re not necessarily going to index low-quality pages anyway.
If you want to point us to specific URLs that you want to end up in the index, and you don’t necessarily link to them, or you want to give us another signal for which URLs you want indexed, then use a sitemap XML file. But that’s optional. Cool.
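For reference, a sitemap is just an XML file listing the URLs you want considered. A minimal example, with made-up URLs:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/recipe/cupcakes</loc>
  </url>
  <url>
    <loc>https://www.example.com/recipe/brownies</loc>
  </url>
</urlset>
```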
Now, that’s very generic.
But what can we do in terms of Angular? I have an Angular app, what can I do to make it a little more successful in Google search?
The first thing that people tend to ignore and overlook is a very simple thing.
And that’s titles and meta descriptions.
If I’m looking for a specific recipe, like a party recipe, or a recipe to make something for a birthday party, and I come to these search results, then it’s like: okay, so maybe these are fantastic party recipes.
But which one do I click on?
Because it’s all “Barbara’s baking blog”. And maybe the description is exactly the same on every one. I’m like: you don’t know.
Not knowing which one to click on is not a good thing. Whereas if I give people a better description and a better title for each of my pages, specific to the recipe that the page is about, then the user can make a better decision.
So again, this is not ranking; it has nothing to do with ranking. It’s just that these pages might be really good in terms of having party recipes. But what does it help you to rank well if no one clicks on your result? Again, this is not about ranking.
This is about helping the user make the right decision and end up on the right page of your website.
Because if I click on the first one, and that’s like an apple pie, I’m like: but I don’t like apple pie, I would like something else.
That’s not really helping anyone is it?
To do that in Angular, you have two built-in helpers, two built-in services: the Title service and the Meta service. Use them in your component to specifically give me a title that makes sense.
In this case, I use the recipe title in my page title, and I have some recipe snippet, some description that tells the user what this is and why this is a good party recipe. For instance, the description might say something like: fantastic brownies for a birthday party, easy to make, and you can make lots of them really quickly.
That’s a good recipe to like, prepare something for a party. I love that.
So use the Meta service and the Title service to provide page-specific information for the users to see in search results.
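A minimal sketch of what that can look like in a component (the recipe text is hard-coded here for brevity; in a real app it would come from the loaded recipe data):

```typescript
import { Component, OnInit } from '@angular/core';
import { Meta, Title } from '@angular/platform-browser';

@Component({
  selector: 'app-recipe',
  templateUrl: './recipe.component.html',
})
export class RecipeComponent implements OnInit {
  constructor(private title: Title, private meta: Meta) {}

  ngOnInit(): void {
    // Page-specific title and description for this recipe
    this.title.setTitle('Party brownies: quick birthday recipe');
    this.meta.updateTag({
      name: 'description',
      content:
        'Fantastic brownies for a birthday party. Easy to make, and you can make lots of them really quickly.',
    });
  }
}
```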
Also, if you have something that you don’t want in the index, some page that you know is not good or something that you just don’t want to show up in search results, then use the Meta service to set the robots meta tag to noindex.
If we see a noindex, we’re not going to index it. Surprise, right? So that’s a fantastic way of making that happen.
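With the same Meta service, that’s roughly a one-liner:

```typescript
// Tell search engines not to index this particular page
this.meta.updateTag({ name: 'robots', content: 'noindex' });
```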
That’s how you exclude things. You can also do something else; we’re going to look into that in a second.
But I would like to talk about URLs for a moment and specifically URLs that are pointing to the same thing.
So you probably have something like this, right? You have a recipe and in this case is the Cupcakes Recipe, but you want to be nice to users.
So you’re treating them case-insensitively, right?
But those technically still are two different URLs, they’re just pointing to the same content.
And maybe you have like some legacy stuff going on.
And you also need to support the recipe ID as a parameter here, because, you know, if it’s a number, it’s probably an ID.
And maybe there’s a really, really, really old system behind it and you have URLs like this. That’s completely fine.
It’s just, we would have to figure out which one to show in search results.
And we might not pick the one that you want, especially if you’re planning on retiring the last two. If you don’t want these to be around forever, you should redirect them, and eventually tell us not to use them for indexing and showing in search results anymore.
You can do that with what we call a canonical. Now, how do you tell us what the canonical URL is?
Well, there is nothing built into Angular, but you can build your own service.
A lovely gentleman, Hamlet, talked to me on Twitter about this and said: look, do you think this is a good strategy?
And I’m like, this is actually how I do it; I just never thought about it.
But yeah, sure. What you can do here is basically create yourself a service that injects the document, creates a link tag, sets the rel and href, and puts it in the head.
That’s what you need to do. And that’s all you need to do.
The cool thing is, once you have your service, you can use it in your component to tell us which canonical URL you want us to use. In this case, we have some configured base URL, and we want to use the recipe name in the URL rather than any IDs or something.
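A sketch of such a service along the lines he describes (the exact shape is up to you):

```typescript
import { Inject, Injectable } from '@angular/core';
import { DOCUMENT } from '@angular/common';

@Injectable({ providedIn: 'root' })
export class CanonicalService {
  constructor(@Inject(DOCUMENT) private doc: Document) {}

  setCanonicalUrl(url: string): void {
    // Reuse an existing canonical link tag if there is one,
    // otherwise create it and attach it to <head>.
    let link = this.doc.querySelector<HTMLLinkElement>('link[rel="canonical"]');
    if (!link) {
      link = this.doc.createElement('link');
      link.setAttribute('rel', 'canonical');
      this.doc.head.appendChild(link);
    }
    link.setAttribute('href', url);
  }
}
```

In the component you would then call something like `this.canonical.setCanonicalUrl(baseUrl + '/recipe/' + recipe.name)` once the recipe is loaded.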
However, by giving us this canonical, you’re only suggesting a canonical to us. We might find that it’s not a good choice, because maybe you misconfigured it, or maybe we have another good reason.
There are plenty of reasons; it’s as complicated as it always is. Normally we pick your suggestion; sometimes we won’t, and that has reasons, but normally it’s fine.
You can also prevent us from crawling. I mentioned that early on: with robots.txt. You create a robots.txt and basically tell the user agents not to crawl a specific set of URLs, or a specific URL, or not to crawl anything at all.
So anything under private, it’s like a substring match: everything under private will not be crawled.
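That rule looks something like this (the path is illustrative):

```
User-agent: *
Disallow: /private
```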
That doesn’t mean that we don’t index it; it just means that our crawler will not touch it. Why is that different?
Well look at it from this perspective.
Someone somewhere links to a URL, slash private slash Martins high school photo dot jpg, and says: here’s Martin’s absolutely ridiculous high school photo. We crawl that website, because it’s someone else’s website, and they happily let us index things.
And we find this link, so we put that link into the crawl queue.
The crawler goes: oh, I can’t crawl this.
But we can still put it in the index in the end, because as far as we know, this is about Martin’s stupid high school photo.
It’s not really a useful document, because most of the signals are based on crawling and we can’t get them. So we probably rank it really low, but it still probably ends up in the index.
It doesn’t mean that it shows up in search results, but it could. So use the noindex robots meta tag to block something from the index, and use robots.txt to prevent us from crawling. That can be useful, especially if you have some specific legacy URLs that you just don’t want the crawler to hit.
Or if you have something like APIs that you only call in your mobile app and you don’t want the bots to crawl them. But be careful with that, because you can actually shoot yourself in the foot.
So this is a website about cats. And as you can see down here, there are no images; all the cat images are missing.
Why is that? Well, the page is mobile-friendly. That’s fantastic. But there are no cat images, so this is actually pretty bad content.
Because it’s an empty page. What happened here is that someone thought: this API that we have, maybe it costs us money or something. But it’s being called from the JavaScript in the front end.
But we don’t want the crawler to go for it because you know, maybe this request cost us money or something.
We blocked the crawler from it.
But that means that when we are rendering the page, we’re making the same call, and we are still obeying robots.txt, even if it’s an AJAX call. So we are not actually making this call, and then we are not getting any cats.
So be really, really careful with what you’re putting into your robots.txt.
Don’t be too excited about putting everything into robots.txt, like: oh, we can stop the crawler from making requests to our API.
Yeah, but if that call needs to happen in the front end, I wouldn’t robots.txt it away.
Another thing is, if you want your search results to look a little nicer, you might want to look into what we call rich results. So here you see two examples.
Here we have like some recipes they have, in this case, even a video probably or like a gallery of images.
We have the provider we have the name, we have some ratings, we have 276 reviews. For this one, we have calories and preparation time.
For products, we might also have, like, ratings... oops, what’s it doing? Okay, good. We have some ratings here.
We have votes, we have the price and so on and so forth.
That’s what we call rich results. And how do you get them?
Well, it has to be a high quality site to begin with.
But once we have established that, you can add something that we call JSON-LD, or structured data. Structured data is a standard; it’s basically a bunch of standards from schema.org.
It’s an open consortium.
And they produce the standards. And then you can put a blob like this, some JSON Linked Data blob, into your website. In this case, it describes an event.
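Something along these lines, with made-up event details:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Event",
  "name": "Toast and Jam Festival",
  "startDate": "2019-09-21T10:00",
  "location": {
    "@type": "Place",
    "name": "Example Hall",
    "address": "123 Example Street, London"
  }
}
</script>
```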
And then we might pick it up and show some rich results for this specific event, for instance. Now, you have a testing tool for that. It’s called the Rich Results Test.
You can plug in your URL, or actually code as well; you have both options.
And then you see if your page would be eligible, or if there are any problems, warnings or recommendations that you should follow. There are plenty of verticals that we support, like events, products, recipes, movies, articles, courses.
In some countries we support jobs, and so on and so forth. Go to that link, check it out. There’s a gallery of all the verticals that are supported, and the markup that you need to put into your website to get the rich results for them.
Now let’s talk about performance.
So this is my competitor; they have dog images. And this is, you know, not exactly the best performance: if I’m on mobile and I want to see a dog, it takes a while to get there.
If we look at it, what kind of metrics should we probably be using? Because Google actually also looks at performance.
It’s one of the many ranking factors. And I get asked: is it Time to First Byte? Or is it First Contentful Paint, or Time to Interactive?
That really depends.
And it’s a little harder to say than that; also, I can’t really disclose which specific metric we’re using.
But think of it from this perspective. What does the user need?
What does the user want? I want to see dogs. So for me, the most important metric on this particular page is the time to first doggo. Right?
And this isn’t particularly great here. There was a fantastic session this morning by Craig; if you haven’t seen it, I highly recommend looking into server-side rendering or pre-rendering.
If you have a website, especially if it’s really large and does a lot of image stuff, you want to make sure that your website is fast for your users, because that also benefits you in terms of SEO. For Angular, there’s Angular Universal; again, I highly recommend watching the video of Craig’s talk if you haven’t seen it. There are some trade-offs and some considerations that you need to take into account.
And unfortunately, I don’t have the time to go into the details.
But it does make a difference.
You do need to change your front end code, though.
So if that’s not an option, but you could change something in the back end, there’s a workaround called dynamic rendering. Dynamic rendering, to boil it down, is basically checking at your server what kind of request you’re getting: is this a request from a user’s browser?
In that case, we just serve the regular Angular application.
But if this request comes from a crawler, and crawlers tell you by the user agent they’re using, then you can actually serve something different.
You can put your regular front-end code into a pre-rendering solution, basically a headless browser that creates a static HTML version of your content, and send that static HTML version to the crawlers.
That also works with all the search engines that do not run JavaScript.
So that’s one solution, and it only requires a change on the server side rather than in your front-end code.
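To make the idea concrete, here is a very rough Node/Express sketch. The user-agent check is naive, and the `prerender` helper is a stand-in for whatever pre-rendering solution you pick (Rendertron, Puppeteer, a hosted service):

```typescript
import express from 'express';

const app = express();

// Naive check; real setups use a maintained list of known crawler user agents
const BOT_UA = /googlebot|bingbot|twitterbot|facebookexternalhit/i;

// Hypothetical helper that runs the app in a headless browser
// (e.g. Puppeteer or a Rendertron instance) and returns static HTML
declare function prerender(url: string): Promise<string>;

app.get('*', async (req, res) => {
  if (BOT_UA.test(req.headers['user-agent'] ?? '')) {
    // Crawler: serve the pre-rendered, static HTML version
    res.send(await prerender(req.originalUrl));
  } else {
    // Regular browser: serve the normal client-side Angular app
    res.sendFile('index.html', { root: 'dist/my-app' });
  }
});

app.listen(8080);
```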
But it has downsides.
It’s a workaround, you have the additional infrastructure, you need to make changes to your server, you need to have the pre-rendering solution.
And you do not give your users a faster website.
Because oftentimes, server-side rendering, especially with hydration, is faster than purely client-side rendering.
There’s a bunch of tools that you can use. Rendertron is pretty much a ready-made solution that you have to deploy yourself. Puppeteer is the thing that allows you to control and run a headless Chromium.
It’s basically the do-it-yourself option. Or, if you prefer a service, then you can go to the people who run prerender.io, for instance.
But it’s a workaround. If you want to try it out, I highly recommend checking out the Rendertron codelab that we put together, to get a feeling for what that would look like and how it would work, what the workflow for it is, that kind of information.
And even more information can also be found on the YouTube video series that we created on the Google Webmasters channel.
There’s like plenty of videos.
There’s one specific episode just for Angular. And if you go to my Twitter, there’s a pinned tweet that lets you submit ideas for upcoming videos.
So if you want to see a specific topic being handled, let me know. But please watch the videos beforehand, because I have plenty of submissions that said, like: please talk about Angular.
I’m like, yeah, that’s Episode Five.
But yeah, you get the idea.
Other things can go wrong too. Sometimes it’s small things.
So here I learned how to fly a paraglider. But I made a tiny mistake with my arms and boom, right?
And that can happen with SEO as well. Sometimes you make a small mistake and things go wrong.
So for instance, what happens if I go to a URL that doesn’t exist?
I get an error page.
That’s good, except the HTTP side of things looks different.
The server, because it’s a single-page application, doesn’t know if this URL actually exists or not. It just goes: yeah, sure, cool, here you go.
And then the JavaScript figures out, oh, this cat actually doesn’t exist.
And then it fails.
Well, that’s not really good, is it?
Luckily, we are normally pretty good at catching these things.
But not always. Sometimes we just don’t see this and then some something like this happens.
That’s not great either, is it?
So how can you prevent that in a single page application?
Well, one way of preventing it is: once you know that this is an error, you redirect us to a page that returns a 404 and says, this is an error, we’re sorry, goodbye.
Because then our crawler will be like: alright, so this just redirects, and then, okay, cool, that’s fine.
The alternative is to use the meta service as I explained earlier on, and tell us not to index this page.
So here we are specifically setting the robots meta tag to noindex, and that means: do not index this page.
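A sketch of both options in a component, once it has discovered that the requested thing doesn’t exist (the route and names are made up):

```typescript
import { Component } from '@angular/core';
import { Router } from '@angular/router';
import { Meta } from '@angular/platform-browser';

@Component({ selector: 'app-cat', template: '...' })
export class CatComponent {
  constructor(private router: Router, private meta: Meta) {}

  // Call this once you know the requested cat doesn't exist
  handleCatNotFound(): void {
    // Option 1: redirect to a route that the server answers with a real 404
    this.router.navigate(['/not-found']);

    // Option 2: keep the soft error page, but tell crawlers not to index it
    // this.meta.updateTag({ name: 'robots', content: 'noindex' });
  }
}
```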
And we’re like: okay, cool, fine. You might think: well, maybe we can actually put that noindex on all our pages, and then only remove it once we know that our page is not an error, right? That’ll prevent errors for sure. That’s like the bulletproof way of preventing errors. But I wouldn’t do that.
Why would I not do that? Because of the pipeline. Again, we saw it earlier: we take a URL from the queue, we make a GET request, we get the HTML, we look at the HTML, and it says noindex. So we’re not going to render it, and we’re not going to put it in the index.
So all your pages disappear without JavaScript ever being run. It’s not a good experience.
Also, if you use things like cookies, be very, very careful.
So you have a homepage that deals with a cookie pop-up. And maybe you even have something that deals with the fact that Googlebot doesn’t click, so you still set the cookie for Googlebot somehow. And then on all other pages, you check that the cookie is set.
But that’s actually a problem for Googlebot, because Googlebot doesn’t persist data across pages, right? So no local storage, no session storage, no IndexedDB, no cookies: nothing is persisted across page loads.
Do not rely on them being there. And when I say do not rely on things: feature detection is also important, but you need to make sure that you also handle the error cases.
So for instance, here we are checking if the browser supports geolocation; if so, we load some specific localised content.
And that’s great. Except, what if this goes wrong?
The browser supports geolocation, but I said no, because there’s a pop-up that comes up, right?
Googlebot declines these pop-ups; it’s not going to click yes on them. It’s not going to allow you to access the microphone, webcam or location.
So in that case, we don’t have any content, because the browser supports it.
But the user said no, so the page is empty. That’s what’s going to happen with Googlebot as well.
Instead, what you should be doing is specifying the error callback as well, to handle these kinds of situations and load fallback content. So this is the better way to do it.
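Roughly like this (the two content-loading functions are placeholders):

```typescript
function loadContent(): void {
  if ('geolocation' in navigator) {
    navigator.geolocation.getCurrentPosition(
      // Success: the user granted access, load localised content
      position => loadLocalContent(position.coords),
      // Error: permission denied, timeout, etc. Googlebot ends up here,
      // so serve sensible fallback content instead of an empty page.
      () => loadFallbackContent(),
    );
  } else {
    // Feature not supported at all
    loadFallbackContent();
  }
}

// Placeholder implementations for this sketch
function loadLocalContent(coords: { latitude: number; longitude: number }): void {}
function loadFallbackContent(): void {}
```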
There are more things like that you should take a look at. We have a troubleshooting guide for these things specifically.
And last but not least, we have tools to help you test things. The first one for testing stuff is the Mobile-Friendly Test.
It tells you if your page is mobile-friendly (duh, it’s the name). But it also tells you if your page looks more or less right: it gives you above-the-fold screenshots, so you can tell if your content shows up.
It also gives you the actual full rendered HTML that we are looking at for indexing. So you get the full-blown HTML that we have after rendering as well.
And you get JavaScript errors, if there were any; you get a proper JavaScript console in the tool.
But this is URL-specific: you have to put in a URL, and then you get this information for it.
But what if I want to see more what’s happening on my site as a whole on all my pages?
Well, then you can use search console.
Search Console tells you which pages are indexed, how many have errors, and which ones we have excluded from the index. It also tells you why we have not indexed something.
So in this case, for instance, it has actually returned a 404.
So maybe this page doesn’t exist.
So if that’s an error, then I want to fix it. You also see how you’re doing: how often does your page show up in search results, and how often is it clicked on?
So I can tell that something has gone wrong here. If this was a deployment, I would probably want to know what happened. It recovered afterwards.
So that’s probably good; maybe we fixed it afterwards. Cool.
You can also test pages live like you can put in any URL of your page and figure out if we would index it if we could, or if it has already been indexed.
This one has actually been crawled but not indexed.
Maybe the content on the page is not very good quality.
However, as I said, at I/O we announced the new Googlebot updates. The testing tools haven’t been updated yet; we’re working on updating them.
So please stay tuned on the webmasters blog to see when the tools have been updated. Until then, if it works in the tools, you’re absolutely fine.
If it doesn’t work in the tools, you’ll have to figure out if that’s because we are running an old chrome version or not.
But normally things should just work fine if it works fine in the tools.
As I promised you a bunch of resources: here you go. Thank you very much for your attention, and have a fantastic rest of the day. Find me for stickers.