Crawlers and indexing in Blogger website settings
Crawling and indexing are two different processes that search engines use to find and store content on the web:
Crawling
Search engines use crawlers, also known as spiders or search engine bots, to discover new pages by following links across the web. Crawlers then analyze the content and code on each page.
Indexing
Search engines use indexing to store, organize, and analyze the content and connections between pages. Once a page is indexed, it can appear in search results.
Here are some ways to manage crawling and indexing for your Blogger blog:
Use a custom robots.txt file
Set a custom robots.txt file for your blog that tells search engine crawlers which pages or files they can request.
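For reference, the robots.txt that Blogger serves by default looks roughly like this (the exact file for your blog may differ, and example.blogspot.com is a placeholder for your own address):

  User-agent: Mediapartners-Google
  Disallow:

  User-agent: *
  Disallow: /search
  Allow: /

  Sitemap: https://example.blogspot.com/sitemap.xml

The Disallow: /search rule keeps crawlers out of Blogger's search and label result pages while leaving posts and pages crawlable.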
Use custom robots header tags
Set custom robots header tags to control the robots directives (such as noindex or noarchive) that your blog serves to search engines for each type of page.
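Each toggle corresponds to a standard robots directive. For example, enabling noindex and noodp for archive pages should result in those pages being served with an HTTP response header roughly equivalent to:

  X-Robots-Tag: noindex, noodp

noindex keeps the page out of search results; noodp told engines not to use Open Directory Project descriptions, a legacy directive that major engines no longer honor.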
Use Google Search Console
Use the Google Search Console URL inspection tool to see if your blog is indexed and request indexing if it isn't.
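For a rough overview of what Google has already indexed (outside Search Console), you can also run a site: query in Google search, for example:

  site:example.blogspot.com

with example.blogspot.com standing in for your own blog address. Pages that appear are indexed; for a definitive per-page answer, use the URL inspection tool.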
Organize your content
Organize your content so that URLs are logical and easy for humans to understand.
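Blogger builds post URLs from the publish date plus the post title (or a custom permalink), so a descriptive title yields a URL that reads naturally, for example this hypothetical address:

  https://example.blogspot.com/2024/05/edit-custom-robots-txt.html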
Tell Google about new or updated pages
Let Google know about new or updated pages on your site.
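Blogger generates a sitemap for your blog automatically, and submitting it once under Sitemaps in Search Console keeps Google informed of new and updated posts. It lives at:

  https://example.blogspot.com/sitemap.xml

again with example.blogspot.com standing in for your own address.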
Crawler settings in a Blogger website
Steps
1. Go to Settings and scroll down to "Crawlers and indexing".
2. Open a new tab, paste your blog's URL with /robots.txt appended, and load it.
3. Copy the text it shows, then go back to "Crawlers and indexing" in Settings.
4. Turn on "Enable custom robots.txt", paste the copied text into the custom robots.txt field, and save.
5. Turn on "Enable custom robot header tags".
6. Click "Archive and search page tags", enable noindex and noodp, and press SAVE.
7. Click "Post and page tags", enable all and noodp, and press SAVE (you can verify the result with the quick check after these steps).
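Once you've saved these settings, you can sanity-check that the live robots.txt behaves as intended. Here is a minimal sketch using only the Python standard library (example.blogspot.com is a placeholder for your blog):

  from urllib.robotparser import RobotFileParser

  # Placeholder; replace with your own blog's address.
  BLOG = "https://example.blogspot.com"

  parser = RobotFileParser()
  parser.set_url(BLOG + "/robots.txt")
  parser.read()  # fetch and parse the live robots.txt

  # With Blogger's default rules, /search pages are disallowed,
  # while ordinary post URLs are allowed.
  print(parser.can_fetch("*", BLOG + "/search/label/news"))     # expected: False
  print(parser.can_fetch("*", BLOG + "/2024/05/my-post.html"))  # expected: True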
Thanks for being with us ☺️