Wednesday, 5 April 2023

Meta Robots Tags

What Are Meta Robots Tags?

Meta robots tags (REP tags) are elements of an indexer directive that tell search engine spiders how to crawl and index specific pages on your website. They enable SEO professionals to target individual pages and instruct crawlers on what to follow and what not to follow.

Meta Robots Tags Basics

You may hide pages from indexing in several ways, including meta robots tags implementation. Here you can use the following directives:

all – No limitations for indexing and content display. This directive is used by default and has no impact on the search engines’ work unless otherwise specified.
noindex – Do not show this page or a ‘Saved Copy’ link in the SERPs.
nofollow – Do not follow the on-page links.
none – The same as the noindex and nofollow directives combined.
noarchive – Do not show a ‘Saved Copy’ link in the SERPs.
nosnippet – Do not show an extended description of this page in the SERPs.
notranslate – Do not offer a translation of this page in the SERPs.
noimageindex – Do not index the on-page images.
unavailable_after: [RFC-850 date/time] – Do not show this page in the SERPs after the specified date/time, given in RFC 850 format.

How to Use Meta Robots Tags

Meta robots tags are simple to use and do not take much time to set up. In four simple steps, you can take your website’s indexation process up a level:

1. Access the code of a page by pressing CTRL + U.
2. Copy and paste the relevant part of the page’s code into a separate document.
3. Provide step-by-step guidelines to developers using this document. Focus on how, where, and which meta robots tags to inject into the code.
4. Check to make sure the developer has implemented the tags correctly. To do so, I recommend using the Screaming Frog SEO Spider.
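Once the developer has added the tag, it sits in the <head> section of the page’s HTML. Here is a minimal illustration (the page title and the chosen directives are placeholder values; pick the directives that match your goals):

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Example page</title>
  <!-- Keep this page out of the index and do not follow its on-page links -->
  <meta name="robots" content="noindex, nofollow">
</head>
<body>
  ...
</body>
</html>

Several of the directives listed above can be combined in a single content attribute, separated by commas.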

Setting up Robots.txt

What Is Robots.txt?

Robots.txt is a text file used to instruct search engine bots (also known as crawlers, robots, or spiders) how to crawl and index website pages. Ideally, a robots.txt file is placed in the top-level directory of your website so that robots can access its instructions right away.

Why Is Robots.txt Important?

Correct robots.txt operation ensures that search engine bots are routed to the required pages and kept away from duplicate content that can lead to a fall in rankings. For that reason, you should make sure your site has a thoughtfully created robots.txt file.

If a robots.txt file is set up incorrectly, it can cause multiple indexing mistakes. So, every time you start a new SEO campaign, check your robots.txt file with Google’s robots.txt testing tool.

Do not forget: if everything is correctly set up, a robots.txt file will speed up the indexing process.

Robots.txt on the Web

Yet, do not forget that any robots.txt file is publicly available on the web. To access a robots.txt file, simply type: www.websiteexample.com/robots.txt.

This availability means that:

You cannot secure or hide any data within it.
Bad robots and malicious crawlers can take advantage of a robots.txt file, using it as a detailed map to navigate your most valuable web pages.

Also, keep in mind that robots.txt commands are only directives. This means that search bots can crawl and index your site even if you instruct them not to. The good news is that most search engines (like Google, Bing, Yahoo, and Yandex) honor robots.txt directives.

Robots.txt files definitely have drawbacks. Nonetheless, I strongly recommend you make them an integral part of every SEO campaign. Google recognizes and honors robots.txt directives and, in most cases, having Google under your belt is more than enough.

Robots.txt Basics

The robots.txt file should:

Contain plain text in UTF-8 encoding, consisting of records (lines) divided by line breaks.
Be situated at the root of the website host to which it applies.
Be unique.
Contain no more than 1,024 rules.
Be under 500KB.

Google bots will treat all content as available for indexing if:

There is no robots.txt file.
The robots.txt file isn’t served in text format.
They do not receive a 200 OK response.

Note: You can include a byte order mark (BOM) at the beginning of the robots.txt file, but it is not recommended, as it will be ignored by bots. The standard recommends the use of a newline before each User-agent directive.

If your file contains symbols beyond UTF-8, bots may analyze it incorrectly. They will execute the valid entries only, ignoring the rest of your content without notifying you about the mistake.

Robots.txt Structure

A robots.txt file consists of:

One or several User-agent directives, addressed to the robots of various search engines.
Disallow and Allow directives that restrict or permit indexing.
Sitemap directives.

Disallow directives forbid indexing, while Allow directives permit it. Each record consists of a field (allow, disallow, host, or user-agent), a colon, and a value. Empty spaces are not required, but are recommended for better readability.

You can place comments anywhere in the file and mark them with the # symbol. Google bots ignore everything between the # symbol and the next newline.

The general format of a record is: <field>:<value> <#comment (optional)>. Empty spaces at the beginning and the end of a record are ignored. Letter case does not matter for the field element, but it can matter for the value element (file paths, for example, are case sensitive).
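Put together, a short file following that structure might look like this (the domain and paths are placeholders):

# Rules for all crawlers
User-agent: *
Disallow: /private/
Allow: /private/public-page.html

# Rules for one specific crawler
User-agent: Googlebot
Disallow: /tmp/

Sitemap: https://www.websiteexample.com/sitemap.xml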

What to Hide with Robots.txt

Obviously, you do not want to show search engines your private technical pages, customers’ personal data, or duplicate content. Robots.txt files can be used to exclude certain directories, categories, and pages from search. To that end, use the “disallow” directive. Here are some pages you should hide using a robots.txt file:

Pages with duplicate content
Pagination pages
On-site search pages
Dynamic product and service pages
Account pages
Admin pages
Shopping cart
Chats
Thank-you pages

How to Use Robots.txt

Robots.txt files are pretty flexible and can be used in many ways. Their main benefit, however, is that they enable SEO experts to “allow” or “disallow” multiple pages at once without having to access the code of each page, one by one.

Here is an example of how I instruct Googlebot to avoid crawling and indexing all pages related to user accounts and the cart, as well as the dynamic pages that are generated when users look for products in the search bar or sort them by price, and so on.
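A sketch of that kind of rule set, assuming hypothetical /account/ and /cart/ paths and hypothetical search and sort parameters, could look like this:

User-agent: Googlebot
# Keep user account and shopping cart pages out of the crawl
Disallow: /account/
Disallow: /cart/
# Block dynamic URLs generated by on-site search and price sorting
Disallow: /*?s=
Disallow: /*?sort=

The exact paths and parameters depend on how your site generates these URLs, so check them against your own URL structure before deploying anything.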

Types of Sitemaps

XML sitemaps have a couple of limitations:

A maximum of 50,000 URLs.
An uncompressed file size limit of 50MB.

Sitemaps can be compressed using gzip (the file name would become something similar to sitemap.xml.gz) to save bandwidth for your server. But once unzipped, the sitemap still can’t exceed either limit.
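For reference, a bare-bones sitemap file stays well within those limits. The URLs and dates below are placeholders:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.websiteexample.com/page-one/</loc>
    <lastmod>2023-04-05</lastmod>
  </url>
  <url>
    <loc>https://www.websiteexample.com/page-two/</loc>
    <lastmod>2023-03-28</lastmod>
  </url>
</urlset>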

Whenever you exceed either limit, you will need to split your URLs across multiple XML sitemaps. Those sitemaps can then be combined into a single XML sitemap index file, often named sitemap-index.xml. Essentially, a sitemap for sitemaps.

For exceptionally large websites that want to take a more granular approach, you can also create multiple sitemap index files. For example:

sitemap-index-articles.xml
sitemap-index-products.xml
sitemap-index-categories.xml

But be aware that you cannot nest sitemap index files.

For search engines to easily find every one of your sitemap files at once, you will want to:

Submit your sitemap index(es) to Google Search Console and Bing Webmaster Tools.
Specify your sitemap index URL(s) in your robots.txt file, pointing search engines directly to your sitemaps as you welcome them to crawl your site.

You can also submit sitemaps by pinging them to Google. But beware: Google no longer pays attention to hreflang entries in “unverified sitemaps”, which Tom Anthony believes to mean those submitted via the ping URL.
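As an illustration, a sitemap index file simply lists the child sitemaps (the file names here are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://www.websiteexample.com/sitemap-articles.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://www.websiteexample.com/sitemap-products.xml</loc>
  </sitemap>
</sitemapindex>

And the matching reference in robots.txt is a single line:

Sitemap: https://www.websiteexample.com/sitemap-index.xml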

XML Image Sitemap

Image sitemaps were designed to improve the indexation of image content. In modern-day SEO, however, images are embedded within page content, so they will be crawled along with the page URL. Moreover, it’s best practice to use JSON-LD schema.org/ImageObject markup to call out image properties to search engines, as it provides more attributes than an image XML sitemap.

Because of this, an XML image sitemap is unnecessary for most websites. Including an image sitemap would only waste crawl budget. The exception is if images help drive your business, such as a stock photo website or an ecommerce site gaining product page sessions from Google Image search.

Know that images don’t have to be on the same domain as your website to be submitted in a sitemap. You can use a CDN as long as it’s verified in Search Console.

XML Video Sitemap

Similar to images, if videos are critical to your business, submit an XML video sitemap. If not, a video sitemap is unnecessary. Save your crawl budget for the page the video is embedded into, ensuring you mark up all videos with JSON-LD as a schema.org/VideoObject (a minimal example appears at the end of this section).

Google News Sitemap

Only sites registered with Google News should use this sitemap. If you are, include articles published in the last two days, up to a limit of 1,000 URLs per sitemap, and update with fresh articles as soon as they’re published. Contrary to some online advice, Google News sitemaps don’t support image URLs. Google recommends using schema.org image or og:image to specify your article thumbnail for Google News.
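Here is a minimal sketch of the schema.org/VideoObject markup mentioned above, with placeholder values, embedded in the page that hosts the video:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "VideoObject",
  "name": "Example product demo",
  "description": "A short walkthrough of the example product.",
  "thumbnailUrl": "https://www.websiteexample.com/images/demo-thumbnail.jpg",
  "uploadDate": "2023-04-05",
  "contentUrl": "https://www.websiteexample.com/videos/demo.mp4"
}
</script>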

Mobile Sitemap

This is not needed for most websites. Why? Because Mueller confirmed mobile sitemaps are for feature phone pages only, not for smartphone compatibility. So unless you have unique URLs specifically designed for feature phones, a mobile sitemap will be of no benefit.

HTML Sitemap

XML sitemaps take care of search engine needs. HTML sitemaps were designed to help human users find content. The question becomes: if you have a good user experience and well-crafted internal links, do you need an HTML sitemap?

Check the page views of your HTML sitemap in Google Analytics. Chances are, they’re very low. If not, that’s a good indication you need to improve your website navigation.

HTML sitemaps are generally linked in website footers, taking link equity from every single page of your website. Ask yourself: is that the best use of that link equity? Or are you including an HTML sitemap as a nod to legacy website best practices? If few humans use it, and search engines don’t need it because you have strong internal linking and an XML sitemap, does that HTML sitemap have a reason to exist? I would argue no.

Dynamic XML Sitemap

Static sitemaps are simple to create using a tool such as Screaming Frog. The problem is, as soon as you create or remove a page, your sitemap is outdated. If you modify the content of a page, the sitemap won’t automatically update the lastmod tag. So unless you love manually creating and uploading sitemaps for every single change, it’s best to avoid static sitemaps.

Dynamic XML sitemaps, on the other hand, are automatically updated by your server to reflect relevant website changes as they occur. To create a dynamic XML sitemap:

Ask your developer to code a custom script, being sure to provide clear specifications.
Use a dynamic sitemap generator tool.
Install a plugin for your CMS, for example the Yoast SEO plugin for WordPress.

Key Takeaway

Dynamic XML sitemaps and a sitemap index are modern best practice. Mobile and HTML sitemaps are not. Use image, video, and Google News sitemaps only if improved indexation of these content types drives your KPIs.

XML Sitemap Indexation Optimization

Now for the fun part. How do you use XML sitemaps to drive SEO KPIs?

Only Include SEO Relevant Pages in XML Sitemaps

An XML sitemap is a list of pages you recommend to be crawled, which isn’t necessarily every page of your website. A search spider arrives at your website with an “allowance” for how many pages it will crawl. The XML sitemap indicates that you consider the included URLs to be more important than those that aren’t blocked but aren’t in the sitemap. You are using it to tell search engines, “I’d really appreciate it if you’d focus on these URLs in particular”.

Essentially, it helps you use crawl budget effectively. By including only SEO relevant pages, you help search engines crawl your site more intelligently in order to reap the benefits of better indexation.

You should exclude:

Non-canonical pages.
Duplicate pages.
Paginated pages.
Parameter or session ID based URLs.
Site search result pages.
Reply to comment URLs.
Share via email URLs.
URLs created by filtering that are unnecessary for SEO.
Archive pages.
Any redirections (3xx), missing pages (4xx), or server error pages (5xx).
Pages blocked by robots.txt.
Pages with noindex.
Resource pages accessible by a lead gen form (e.g., white paper PDFs).
Utility pages that are useful to users, but not intended to be landing pages (login page, contact us, privacy policy, account pages, etc.).

I want to share an example from Michael Cottam about prioritising pages:

Say your website has 1,000 pages. 475 of those 1,000 pages are SEO relevant content. You highlight those 475 pages in an XML sitemap, essentially asking Google to deprioritize indexing the remainder.

Now, let’s say Google crawls those 475 pages and algorithmically decides that 175 are “A” grade, 200 are “B+”, and 100 are “B” or “B-”. That’s a strong average grade, and it probably indicates a quality website to which to send users.

Contrast that against submitting all 1,000 pages via the XML sitemap. Now, Google looks at the 1,000 pages you say are SEO relevant content and sees that over 50 percent are “D” or “F” pages. Your average grade isn’t looking so good anymore, and that may harm your organic sessions.

But remember, Google is going to use your XML sitemap only as a clue to what’s important on your site. Just because a page is not in your XML sitemap doesn’t necessarily mean that Google won’t index it.

When it comes to SEO, overall site quality is a key factor. To assess the quality of your site, turn to the sitemap-related reporting in Google Search Console (GSC).

Key Takeaway

Manage crawl budget by limiting XML sitemap URLs to SEO relevant pages only, and invest time to reduce the number of low quality pages on your website.

Fully Leverage Sitemap Reporting

The sitemaps section in the new Google Search Console is not as data-rich as what was previously offered. Its primary use now is to confirm your sitemap index has been successfully submitted. If you have chosen to use descriptive naming conventions, rather than numeric ones, you can also get a feel for the number of different types of SEO pages that have been “discovered” - aka all URLs found by Google via sitemaps as well as other methods, such as following links.

In the new GSC, the more valuable area for SEOs in regard to sitemaps is the Index Coverage report. The report will default to “All known pages”. Here you can:

Address any “Error” or “Valid with warnings” issues. These often stem from conflicting robots directives. Once solved, be sure to validate your fix via the Coverage report.

Look at indexation trends. Most sites are continually adding valuable content, so “Valid” pages (aka those indexed by Google) should steadily increase. Understand the cause of any dramatic changes.

Select “Valid” and look in the details for the type “Indexed, not submitted in sitemap”. These are pages where you and Google disagree on their value. For example, you may not have submitted your privacy policy URL, but Google has indexed the page. In such cases, there’s no action to be taken. What you need to look out for are indexed URLs that stem from poor pagination handling, poor parameter handling, duplicate content, or pages accidentally left out of sitemaps.

Afterwards, limit the report to the SEO relevant URLs you have included in your sitemap by changing the drop-down to “All submitted pages”. Then check the details of all “Excluded” pages. Reasons for the exclusion of sitemap URLs can be put into four action groups:

1. Quick wins: For exclusions due to duplicate content, canonicals, robots directives, 40X HTTP status codes, redirects, or legalities, put the appropriate fix in place.

2. Investigate page: For both “Submitted URL dropped” and “Crawl anomaly” exclusions, investigate further by using the Fetch as Google tool.

3. Improve page: For “Crawled - currently not indexed” pages, review the content and internal links of the page (or page type, as generally it will be many URLs of a similar breed). Chances are, it’s suffering from thin or unoriginal content, or the page is orphaned.

4. Improve domain: For “Discovered - currently not indexed” pages, Google notes the typical reason for exclusion as being that it “tried to crawl the URL but the site was overloaded”.

Don’t be fooled. It’s more likely that Google decided “it’s not worth the effort” to crawl due to poor internal linking or low content quality seen across the domain. If you see a larger number of these exclusions, review the SEO value of the pages (or page types) you have submitted via sitemaps, focus on optimising crawl budget, and review your information architecture, including parameters, from both a link and a content perspective.

Whatever your plan of action, be sure to note down benchmark KPIs. The most useful metric to assess the impact of sitemap optimisation efforts is the “All submitted pages” indexation rate - calculated as the percentage of valid (indexed) pages out of the total submitted URLs. For example, 400 valid pages out of 475 submitted URLs gives an indexation rate of roughly 84 percent. Work to get this above 80 percent.

Why not to 100 percent? Because if you have focussed all your energy on ensuring every SEO relevant URL you currently have is indexed, you have likely missed opportunities to expand your content coverage.

Key Takeaway

In addition to identifying warnings and errors, you can use the Index Coverage report as an XML sitemap sleuthing tool to isolate indexation problems.

Note: If you are a larger website that has chosen to break its site down into multiple sitemap indexes, you will be able to filter by those indexes. This will allow you to:

1. See the overview chart on a more granular level.
2. See a larger number of relevant examples when investigating a type of exclusion.
3. Tackle indexation rate optimisation section by section.

XML Sitemap Best Practice Checklist

Do invest time to:

Compress sitemap files using gzip.
Use a sitemap index file.
Use image, video, and Google News sitemaps only if indexation drives your KPIs.
Dynamically generate XML sitemaps.
Ensure URLs are included only in a single sitemap.
Reference sitemap index URL(s) in robots.txt.
Submit the sitemap index to both Google Search Console and Bing Webmaster Tools.
Include only SEO relevant pages in XML sitemaps.
Fix all errors and warnings.
Analyse trends and types of valid pages.
Calculate the submitted pages indexation rate.
Address causes of exclusion for submitted pages.

Now, go check your own sitemap and make sure you’re doing it right.

SEO-friendly URL structure

An SEO-friendly URL structure is the following things:

Easy to read: Users and search engines should be able to understand what is on each page just by looking at the URL.

Keyword-rich: Keywords still matter and your target queries should be within URLs. Just be wary of overkill; extending URLs just to include more keywords is a bad idea.

Consistent: There are multiple ways to create an SEO-friendly URL structure on any site. It’s essential that, whatever logic you choose to follow, it is applied consistently across the site.

Static: Dynamic parameters are rarely an SEO’s best friend, but they are quite common. Where possible, find a solution that allows your site to render static URLs instead (see the example after this list).

Future-proof: Think ahead when planning your site structure. You should minimize the number of redirects on your domain, and it’s easier to do this if you don’t require wholesale changes to URLs.

Comprehensive: Use the concepts of main content and supplementary content to ensure you have adequate coverage for all relevant topics. This will maximize your site’s visibility.

Supported by data: It normally requires buy-in from a lot of stakeholders to launch or update a particular site structure. Numbers talk, so make use of search and analytics data to support your case.

Submitted to search engines: Finally, create an XML sitemap containing all of the URLs that you want to rank via SEO and submit it to search engines. That will ensure all your hard work gets the reward it deserves.
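To illustrate the “easy to read” and “static” points, here is a hypothetical before-and-after for the same category page:

Dynamic, parameter-driven URL:
https://www.websiteexample.com/index.php?id_category=12&sort=price_asc

Static, keyword-rich URL:
https://www.websiteexample.com/womens-shoes/running/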


Monday, 3 April 2023

Healthcare innovation

 Digital Health Innovation Is Hard. How Can Data Help?

Four Key Takeaways

Healthcare innovation’s greatest promise is its potential to solve a stunning range of problems. They may vary from a localized hepatitis C outbreak and the absence of insights in virtual care encounters to patient data access barriers and inadequate brain health measurements. Where else but in digital health can Fortune 500 organizations and stealth-mode startups alike leverage similar technologies to address such unique challenges?

InterSystems recently assembled a diverse set of innovators to discuss how they use healthy data, actionable insights, and proven technology partnerships to achieve patient health and business goals. Here are four key takeaways.

1. The cloud bolsters security

As a physician with three autoimmune disorders, Suhina Singh heard plenty of talk about patient empowerment but saw few results. She founded Jonda Health, a startup that launched this past summer with the goal of enabling patients to conveniently access, control, and leverage their personal health data.

Jonda found a niche in serving patients with rare diseases and moderate to severe chronic conditions. To stand out in the crowded health app marketplace, Singh drew on the data and her intimate knowledge of the patient experience. That led her to pursue strong security protections and intensive data-processing capabilities, making her customers’ lives simpler and safer.

2. Ensure access to innovation

Cognetivity developed a fast, repeatable, reliable way to evaluate brain health, filling a long-standing unmet need that delayed treatments. The path to market for the startup encompassed research and development, pilots, fundraising, clinical trials, regulatory approvals, and high-stakes deals.


Along the way, Cognetivity realized access — to data, informed decision making, and medical advances — was everything. The company required seamless data retrieval, while clinicians needed easy access to the technology and its resultant insights. But most of all, the new brain health assessment had to be available to patients, including those who face financial and other barriers to testing.

3. Pursue insights, not widgets

Data is the heart of a plan by Baxter International, a global leader in medical technology, to improve nursing workflows and reduce medical errors through clinical decision support. With a footprint spanning every care setting — including patients’ homes — the company needed to ingest and leverage information in each venue.


Baxter’s leaders found that success depended on, above all else, actionable insights at the point of care. The team laser-focused on integrating health data, which put it on track to release a commercially viable product quickly.


4. Don’t try to solve everything

If anyone knows laboratories, it’s the Rhodes Group. For more than 20 years, the company has extracted value from clinical lab data, helping its clients operate and collect revenues. Its analytics solutions fuel risk adjustment profiles for patients, identify undocumented diagnoses, and even help fight hepatitis C.


Rhodes Group CEO James Brown says their accomplishments stem from sticking to what his team knows best: lab data.


“No one can solve healthcare,” he adds. “But what gets us excited is that we can do our part.”







