vmapp.org

@Sherry384

Sherry384

Last seen: Tue 11 May, 2021


Recent posts

 query : Re: Does constantly doing small edits to a website affect its SEO? Are there SEO reasons to not keep a page static for several years even though it doesn't really need to be changed content wise?

@Sherry384

I am sorry, but your friend is generally wrong on his point that small changes are good for search performance. This is, simply, not how search works.

Here are a few points to consider.

Templated Content vs. Content

Google and other search engines can easily compare more than one page to determine which HTML elements are templated, where and how those elements are used (such as the header, sidebar, and footer), and the relative value of the templated content.

In this, Google and other search engines evaluate actual content separately from templated content. Why? Because search engine matches to, for example, sidebar elements poisoned the SERPs with poor results. It is that simple. This was a real problem at one point. For search performance, short of a few additional HTML elements, only the content is considered for search.
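As a rough illustration only (a toy sketch, not Google's actual pipeline), the comparison idea is simple: blocks that repeat across most pages of a site are treated as template, and everything else is treated as content.

# Toy sketch of separating templated blocks from content by comparing pages.
# Not Google's algorithm; the names, threshold, and data are invented.
from collections import Counter

def split_out_template(pages):
    """pages: list of pages, each a list of text blocks (header, sidebar, paragraphs...)."""
    block_counts = Counter(block for page in pages for block in set(page))
    threshold = 0.8 * len(pages)              # a block on ~80%+ of pages is templated
    template = {b for b, n in block_counts.items() if n >= threshold}
    return [[b for b in page if b not in template] for page in pages]

pages = [
    ["Site Nav", "About our dog kennel", "Footer (c) example.com"],
    ["Site Nav", "Dog training tips", "Footer (c) example.com"],
    ["Site Nav", "Contact us", "Footer (c) example.com"],
]
print(split_out_template(pages))
# [['About our dog kennel'], ['Dog training tips'], ['Contact us']]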

How content is evaluated.

There are several semantic analysis techniques used to analyze the content one of which is topical analysis. I have talked about this elsewhere, however, for completeness, I will go over it briefly again.

Google does not make keyword matches. Why? Because many terms can be used in more than one context. For example, the term dog can be an animal, the poor performance of something, a description of a person's attractiveness, etc. Humans can put terms into context instinctively, however, computers cannot. Semantic analysis is used to understand the written word. Semantics has existed since at least the 1970s and follows how people think and communicate. It is intended to accurately understand meaning.

One of the techniques is topical analysis. Each term, where it applies, is assigned a topic category or sub-category from an ontology. Where an ambiguous term is used, disambiguation techniques solve this problem. Each logical content block is evaluated in order and evaluated for importance. The entire content, each header, paragraph, sentence, and term is given a score that very accurately defines the topic from the largest possible meaning to the smallest possible meaning using a hierarchy. For example, a carburetor is a car part. A car part would be a sub-category of a car. While overly simplistic, this example illustrates how a term fits within a hierarchy that allows for narrower or broader meaning.
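To make the hierarchy idea concrete, here is a minimal sketch (the ontology and scoring are invented, purely for illustration): a term resolves to a path in a topic hierarchy, so two terms can be compared at any level of specificity.

# Invented mini-ontology; real systems use far larger, disambiguated ontologies.
ONTOLOGY = {
    "carburetor": ["vehicle", "car", "car part", "carburetor"],
    "sedan":      ["vehicle", "car", "sedan"],
    "dog":        ["animal", "mammal", "dog"],   # one sense of an ambiguous term
}

def topic_overlap(term_a, term_b):
    """Score how deep two topic paths agree: 1.0 = same topic, 0.0 = unrelated."""
    path_a, path_b = ONTOLOGY[term_a], ONTOLOGY[term_b]
    depth = 0
    for a, b in zip(path_a, path_b):
        if a != b:
            break
        depth += 1
    return depth / max(len(path_a), len(path_b))

print(topic_overlap("carburetor", "sedan"))   # 0.5 - both are cars
print(topic_overlap("carburetor", "dog"))     # 0.0 - different branches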

These and other semantic analysis scores of the various content blocks are put into graphs, typically referred to as matrices. Content blocks in relationship with each other are overlapped using matrices of matrices. For example, this means that a header topically supports the paragraph or paragraphs that immediately follow.

These scores are what is stored within the index. Google is clear about some of the semantic methods used within its index and ambiguous about others. However, Google is up front with much of what is used.

Search queries are also evaluated using semantics.

Google is clear that semantic analysis is used to evaluate a search query. Shorter queries are ambiguous while longer queries that allow for more semantic understanding are far clearer and yield far better results. The same analysis is applied to the search query as is to the content. This allows for unambiguous search queries to be closely matched to semantic scoring within the index. Where people think of keyword matches, it is the topical analysis that primarily drives search query matches.

So what do a few simple changes to content do?

Almost nothing. Any minor change to content would barely, if at all, move the needle as far as semantic scoring is concerned. It is that simple. Google has the opportunity to compare the changes found between the cached and fetched page. Minor changes where the content does not make a significant change in the scoring are not seen as fresh. Again, simple. In cases where minor changes are made, the index is updated, however, the net gain otherwise is little to nothing. For content to be considered fresh, the content must change. Not the formatting, not minor changes, but measurable change.
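A hedged sketch of that comparison (the threshold and score vectors here are invented): only a measurable shift in the content's topical scoring would count as a change worth treating as fresh.

# Toy sketch: compare the topical score vector of the cached copy against the
# freshly fetched copy. Tiny deltas (typo fixes, reworded sentences) stay under
# the threshold. Threshold and vectors are invented for illustration.

def looks_fresh(cached_scores, fetched_scores, threshold=0.15):
    topics = set(cached_scores) | set(fetched_scores)
    delta = sum(abs(cached_scores.get(t, 0.0) - fetched_scores.get(t, 0.0))
                for t in topics)
    return delta >= threshold

cached  = {"car": 0.60, "car part": 0.30, "carburetor": 0.10}
fetched = {"car": 0.59, "car part": 0.31, "carburetor": 0.10}   # a minor edit
print(looks_fresh(cached, fetched))   # False - the needle barely moved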


 query : Re: Does context, of a Source Page, influence Link Juice/Weighting to its Target Page? I understand that obtaining a link from an authoritative website, typically carries greater weighting than a link

@Sherry384

PageRank is not exactly what most sites tell you it is. PageRank is based upon the trust network model, where one entity trusts another by use of certificates and prior knowledge. If another entity says "trust me" without prior knowledge and an appropriate certificate, the new entity cannot be trusted. While this is not immediately recognizable when it comes to links (after all, links between sites do not have prior knowledge or certificates), the rest of the trust model made sense in establishing the link vote model used in PageRank. However, there are problems.

For example, site A trusts site B and creates a link. Then site B trusts site C and creates a link to it. Following the trust network model, it is assumed that because of prior knowledge and certificates, site A therefore trusts site C. That would make sense, however, it does not for links. Keep this in mind.

Now consider that rank passed is circular. For example, Bob writes a check to Chuck who writes a check to Fred who writes a check to Bob. Depending upon the values of the checks, who says who gets what? This means that while a vote is passed using links and authority established, it gets confusing rather fast as to who gets what. This is one reason why earlier rank algorithms were recursive.

What is often forgotten is that links too must have value other than one vote, and sites with high authority will pass far too much value, creating a radical curve in the algorithm. There are two fixes for this.

Authority is capped. High authority sites can only pass so much value through the algorithm, so that they do not pass so much value that it blows up the system.

As well, links are evaluated for quality and value. This means that certain criteria are applied to the link before any value is passed. Each link is evaluated and assigned a value between 0 and 0.9 so that, along with the authority cap, only so much of a site's PageRank is passed. This creates a more natural curve in the algorithm so that no super authority skews the whole system. The value of the link consists of the semantic meaning of the link, the location of the link, the source page value, etc. Placement within the page is very important, along with the semantic meaning of the link. For example, a link within the navigation signals that the pages linked to are of the highest value for a site. As well, links in the footer are of less value than links in a sidebar, which are of less value than the navigational links. Links within content are also of high value and often the best place for outgoing links. For anyone who wants a link to their site, a link within the content is best.
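The exact weights are not public; the sketch below only illustrates the mechanics just described, with invented numbers capped at 0.9 acting as a multiplier on whatever value the source page has available to pass.

# Invented placement weights for illustration only; the point is simply that each
# link gets a quality/placement multiplier between 0 and 0.9 that is applied to
# the PageRank the source page has available to pass.

PLACEMENT_WEIGHT = {"navigation": 0.9, "content": 0.8, "sidebar": 0.5, "footer": 0.3}

def link_value(placement, topical_relevance):
    """topical_relevance: 0..1 match between surrounding block, anchor text, and target."""
    return min(0.9, PLACEMENT_WEIGHT[placement] * topical_relevance)

def rank_passed(source_rank_available, placement, topical_relevance):
    return source_rank_available * link_value(placement, topical_relevance)

print(rank_passed(4.0, "content", 0.9))   # 2.88
print(rank_passed(4.0, "footer", 0.9))    # 1.08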

But wait! There's more!! Links higher up within the content are more valuable than links in the middle. But do not think that links at the end of the content have less value than anywhere else. This is because content does not decrease in value in a linear way. The last paragraph is often nearly as important as the first. Who wudda thunk it? As well, consider the semantic value of the content block surrounding the link and how its evaluation matches the link text and the target page content. Relevancy through the entire chain is important.

Also consider the number of links on a page. While on the surface this should not be an issue, many links to the same target, too many links to similar targets, or too many links with similar link text also enter the equation. The overall quality of the link itself is a consideration.

Oh, and let's not forget citations (a quote or mention) with co-occurrence (who are you mentioned with?).

All of these things are factors in assessing link value and act as a multiplier on the remaining PageRank to be passed.

This means that a page with PR6 and two links does not pass PR3 per link. It just does not work that way. Remember Bob, Chuck, and Fred? As PageRank is passed, it needs to be calculated in a recursive fashion until further changes are statistically insignificant. But that is not exactly how things work today, or at least not all the time. Because PR is based upon the trust network model, a value can be assigned between links, and a proximity calculation, much like routing network packets through least-costly routes on the Internet, can be used to calculate PR. This requires a fairly complete indexing of the web to work. (This is a hint on how confident Google is in its index completeness, by the way.) So today a fair approximation of PR can be calculated immediately using a standard calculation taken directly from the trust network model.
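For the recursive part, the classic published PageRank formulation is an iterative calculation that repeats until the values stop changing meaningfully. A minimal sketch (the per-link quality weights discussed above could be folded into the edge weights; this toy version treats every link equally):

# Minimal power-iteration PageRank over a tiny link graph.
def pagerank(links, damping=0.85, tol=1e-6, max_iter=100):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(max_iter):
        new = {}
        for p in pages:
            incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            new[p] = (1 - damping) / n + damping * incoming
        if sum(abs(new[p] - rank[p]) for p in pages) < tol:  # change is insignificant
            return new
        rank = new
    return rank

# Bob -> Chuck -> Fred -> Bob, the circular example from above.
# (Dangling pages with no outgoing links are not handled in this toy version.)
print(pagerank({"bob": ["chuck"], "chuck": ["fred"], "fred": ["bob"]}))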

Looking at your example, it is not a given that a higher authority page will pass more value, for a few reasons: one, authority caps; two, the placement and relevancy of the content block where the link exists relative to the link and the target page; and also the value of the link text itself and the overall quality of the link. Clear as mud?


 query : Use robots.txt to allow Google to crawl images but not index them I have problem with on page optimization. I checked my site to see if it is compatibility for mobile devices with Google's

@Sherry384

Posted in: #Images #Indexing #RobotsTxt #Seo

I have a problem with on-page optimization. I checked my site with Google's tool to see if it is compatible with mobile devices.
It showed me that page speed was low. Google also says it cannot access or crawl images because I blocked them with robots.txt.

In robots.txt I have Disallow: /wp-content/. In /wp-content/ I have all the images for the site. But when I allow this folder, Google indexes my images separately. When I use the site search site:buhehe.de, it shows that Google views each image as a separate page.

What is the best practice? How do I configure robots.txt for images?
For example, here is a page with images: buhehe.de/itunes-musik-auf-iphone-compressor/


 query : Re: Can I mix Microdata and JSON-LD on the same page for different entity My website is using JSON-LD and Microdata. For example, in BreadcrumbList, I have used Microdata format, and for others

@Sherry384

It should be fine to use different syntaxes on the same page.

It has one drawback, though: If you want to connect entities specified in different syntaxes, you can’t nest them. You have to use URIs instead.

Drawback example

You can connect a BreadcrumbList to a WebPage with the breadcrumb property.

When using only one syntax, you can simply nest the items:

<!-- Microdata only -->
<div itemscope itemtype="http://schema.org/WebPage">
<div itemprop="breadcrumb" itemscope itemtype="http://schema.org/BreadcrumbList">
</div>
</div>


<!-- JSON-LD only -->
<script type="application/ld+json">
{
"@context": "http://schema.org",
"@type": "WebPage",
"breadcrumb":
{
"@type": "BreadcrumbList"
}
}
</script>


But if you mix syntaxes, you have to use URIs instead:

<!-- Microdata, giving the entity a URI with the 'itemid' attribute -->
<div itemscope itemtype="http://schema.org/BreadcrumbList" itemid="#page-breadcrumbs">
</div>

<!-- JSON-LD, referencing the URI "#page-breadcrumbs" which is specified in the Microdata -->
<script type="application/ld+json">
{
"@context": "http://schema.org",
"@type": "WebPage",
"breadcrumb":
{
"@type": "BreadcrumbList",
"@id": "#page-breadcrumbs"
}
}
</script>


 query : Re: Do AdWords conversions only track AdWords visitors? I have set up an AdWords conversion and put the conversion tracking JS code on a test page. However, I don't see any conversions tracked when

@Sherry384

Both AdWords and Google Analytics can record conversions using either destination URL page view or event tracking.

Goals (conversions) inside Google Analytics can use event tracking JavaScript code, by editing the website and adding in the extra JS code.
Using Google Tag Manager to deploy the Google Analytics event tracking is easier than editing the code.


 query : Is it possible to serve static content of my SPA for SEO Purpose. Is it possible to serve static content of my SPA for SEO Purpose. I have designed website in Angular4 and it is working

@Sherry384

Posted in: #AngularJs #Nginx #Seo #SinglePageApplication

Is it possible to serve static content of my SPA for SEO purposes?

I have designed a website in Angular 4 and it is working fine. But I have used Material Design and other libs which are not compatible with Angular Universal, so I can't use Angular Universal for my site.

But is it possible to serve pre-generated static pages of the SPA to SEO crawlers by configuring an nginx reverse proxy? There is a headless Chrome version available, and using WebDriver and Python I can generate new content periodically and automatically.

Please let me know if any other alternative is possible.


 query : Re: Can I have multiple AdSense accounts? Is it possible for me to have more than one AdSense account, or would that violate Google's terms of service in some way?

@Sherry384

Yes, you can. If the address is different and other details like the bank account are different, you can have multiple accounts, but you can't use all of the accounts on the same website or blog.


 query : Re: How to describe pixel width of a VideoObject or MediaObject in Schema.org and JSON-LD? Using Schema.org vocabulary in JSON-LD, when describing the height and width dimensions of a VideoObject,

@Sherry384

The height and width properties expect either a Distance or a QuantitativeValue value.

Both of your corresponding examples are correct:




Distance value:

"width": "100 px"

QuantitativeValue value (E37 is the UN/CEFACT Common Code for "pixel"):

"width": {
"@type": "QuantitativeValue",
"unitCode": "E37",
"value": "100"
}



For a consumer that supports both ways, these should be equivalent.

Possible risk when using a Distance value: a consumer might expect the unit to be written in a different way than px (e.g., px., pixel etc.). I guess there is no standard abbreviation (at least the Wikipedia article Pixel doesn’t specify one). You don’t have this risk with a QuantitativeValue value, because the units are standardized in the UN/CEFACT Common Code.


 query : Can my Google Analytics data be recovered? When I sign in, Google doesn't have it and is asking me to sign up for GA I use google analytics. That is working without any problem. But today,

@Sherry384

Posted in: #GoogleAnalytics

I use Google Analytics. It was working without any problem. But today, when I log in to my account, Google Analytics asks me to SIGN UP!
All my site information is missing, without Google giving me a warning.
What is the solution to recover it?
Thanks for your help.


 query : How to index pages with tabs for SEO I have a page with five tabs. The page will load with the first tab opened. The remaining tabs are only visible by clicking the respective tabs; these

@Sherry384

Posted in: #Indexing #Seo #TabbedContent

I have a page with five tabs. The page will load with the first tab opened. The remaining tabs are only visible by clicking the respective tabs; these contents are not loaded with AJAX and are available on page load. But Google does not index the content of the hidden tabs.

I want to know if the following approach will get the tab contents indexed. I can append an additional parameter with the tab id to the URL; JavaScript logic will check for such a parameter on page load and show/hide tabs accordingly. This way there would be individual URLs that can be given to Google for indexing.

I am not sure if this will work; any comments/suggestions are appreciated.


 query : Google Structured Data Testing Tool is repeatedly warning me about failing to include mainEntityOfPage in JSON-LD Have I misunderstood what mainEntityOfPage represents in schema.org? I thought it

@Sherry384

Posted in: #GoogleSearchConsole #JsonLd #SchemaOrg #StructuredData

Have I misunderstood what mainEntityOfPage represents in schema.org?

I thought it told user-agents what the parent page of a given entity is, if the entity being described by structured data is the main entity of a given page.

But every time I mark up in JSON-LD an entity which is not the main entity, Google Structured Data Testing Tool warns me:


The mainEntityOfPage field is recommended. Please provide a value if
available.


So I get multiple repeated warnings recommending that each non-main entity should have a mainEntityOfPage entry.

Which... makes no sense at all.

Unless I have misunderstood what mainEntityOfPage represents in schema.org.

Am I allowed to include something like:

"mainEntityOfPage": "false"




Example:

<body>
<h1>I am a Webpage</h1>
<p>I am an introduction to the webpage.</p>

<article>
<h2>Article One</h2>
<p>As an article I am very important. In fact, I am the Main Entity of this Webpage</p>
</article>

<article>
<h2>Article Two</h2>
<p>I am another article on this Webpage, but I am not as important as Article One</p>
</article>
</body>


When I add structured data in JSON-LD about Article Two, I see the error:


The mainEntityOfPage field is recommended. Please provide a value if
available.


which (I think?) is saying - you need to tell me: Article Two is the Main Entity of... what page?

But... Article Two isn't the Main Entity of anything. It is a less important entity of the webpage for which Article One is the Main Entity.


 query : Re: Google Structured Data Testing Tool Repeat Error: "The URL could not be rendered. Some markup may be missing." When using Google's Structured Data Testing Tool, I consistently get the error:

@Sherry384

After:


isolating error triggers
assessing what all the error triggers had in common (they all referred to external .svg files)


The answer is:

Google Structured Data Testing Tool doesn't yet know how to process (or just ignore) references to SVG files.


 query : Re: Does using too many heading tags (h1s, h2s, and h3s) cause SEO problems? On my website, I utilize many h1, h2, and h3 tags. I am most worried over the fact that I utilize the h1 tag way

@Sherry384

Yes. h1s are meant to be the main title of your page (article). Instead of using more than one h1, take advantage of the strong element. Search bots are like humans today, and using more than one h1 is like having each one scream for attention.


 query : Re: How can I submit a URL to Google's search engine without having a Google account? How can I notify Google's search engine of the existence of a website without opening an account with that

@Sherry384

As explained in Google’s Introduction to Indexing, you don’t have to submit your URLs in order to get them crawled. If you can get links to your new pages from pages already indexed, Google’s crawlers will find the new URLs the next time they crawl the indexed pages again.

There might be an alternative that can be used without a Google account: pinging Google with the URL to your sitemap:


Ping Google specifically with the location of the sitemap: (http://www.google.com/ping?sitemap=URL/of/file)


However, this is documented in a context for notifying Google about changes to your structured data, and it’s not mentioned in the section Make your sitemap available to Google (Submit your sitemap to Google) on a different page, so it’s not clear if Google will accept new sitemaps this way.
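If you want to try it, the ping is a plain GET request; a minimal sketch (the sitemap URL is a placeholder, and as noted above it is not certain Google will accept brand-new sitemaps this way):

# Plain GET to the ping endpoint mentioned above.
import urllib.parse
import urllib.request

sitemap_url = "https://example.com/sitemap.xml"   # placeholder
ping = "http://www.google.com/ping?sitemap=" + urllib.parse.quote(sitemap_url, safe="")
with urllib.request.urlopen(ping) as response:
    print(response.status)   # 200 means the ping was received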


 query : Re: How to use JSON-LD in my page that already have content? I have a website that I created just to test my CSS, JS and PHP skills. I have written data in simple HTML tags. Now I heard about

@Sherry384

Nothing wrong with Microdata or RDFa

First of all, ignore Google’s recommendation if you are fine with using Microdata or RDFa. I think there are only two cases where you should use JSON-LD instead of Microdata/RDFa:


If you have to add structured data with client-side JavaScript.
If it’s too hard for you to add Microdata’s/RDFa’s attributes to your existing HTML elements.


In all other cases, it should be preferable to use Microdata or RDFa (comparison) instead of JSON-LD, because you don’t have to duplicate your content.

If I understand your case correctly, you don’t seem to use JavaScript to add it, and it seems that it would be easy for you to add the attributes to your existing HTML elements (even easier than adding a separate script element), so I would recommend to go with RDFa.

How to add JSON-LD

You don’t touch your existing HTML/content. Instead, you


add a script element somewhere (can be in the head or the body), and
add your structured data to it (thereby duplicating your content).


If you don’t want to use some JSON-LD PHP library for this purpose, you could generate the JSON-LD similar to how you generate your HTML.

So if you have this somewhere on your page to show the name of a person

<div>
<p>Name: <?php echo $Name; ?></p>
</div>


you could add this to your JSON-LD script to add this person and its name

<script type="application/ld+json">
{
"@context": "http://schema.org",
"@type": "Person",
"name": "<?php echo $Name; ?>"
}
</script>


For comparison: How to add Microdata

<div itemscope itemtype="http://schema.org/Person">
<p>Name: <span itemprop="name"><?php echo $Name; ?></span></p>
</div>


For comparison: How to add RDFa

<div typeof="schema:Person">
<p>Name: <span property="schema:name"><?php echo $Name; ?></span></p>
</div>


 query : How can I encourage the browser to cache the HTML5 poster image? I have a web page with a series of videos embedded on the page: <video controls preload="metadata" poster="/path/to/video1/poster-image-1.jpg">

@Sherry384

Posted in: #Cache #Html5 #PageSpeed #Performance #Video

I have a web page with a series of videos embedded on the page:

<video controls preload="metadata" poster="/path/to/video1/poster-image-1.jpg">
<video controls preload="metadata" poster="/path/to/video2/poster-image-2.jpg">
<video controls preload="metadata" poster="/path/to/video3/poster-image-3.jpg">


After I added the videos, the page load slowed down quite a lot, so I set about trying to ascertain the bottleneck.

The Network tab on Firefox Developer Tools reveals that each of the poster images is returning 206 Partial Content and the poster images are always being downloaded from the server, never from the local browser cache - and it's this that is slowing down the page by 2-3 seconds.

How can I ensure that the <video> poster images are retrieved from the local browser cache instead?



Is the only solution to have a series of invisible images elsewhere on the page, like this:

.preload {width: 1px; height: 1px; opacity: 0;}

<img class="preload" src="/path/to/video1/poster-image-1.jpg" alt="" />
<img class="preload" src="/path/to/video2/poster-image-2.jpg" alt="" />
<img class="preload" src="/path/to/video3/poster-image-3.jpg" alt="" />


 query : Any problems with including a unique identifier in bulk email from address? I understand that VERP allows you to encode the recipient's email address into the sending email address and we can

@Sherry384

Posted in: #BulkEmail #Email

I understand that VERP allows you to encode the recipient's email address into the sending email address and we can use this to identify the recipient in bounce management system.

But if I instead want to use a unique identifier to identify a specific message, rather than an email address (to which I may have sent multiple messages), is there anything wrong with me using a from address of something like bounce.123456789@example.com (where 123456789 is my unique message identifier)?

Does using the VERP method offer any other benefits (for example, do mail servers do something differently if the VERP method is used)?

If I send bulk messages (tens of thousands or more) will receiving mail servers have a problem with a unique from address?

I also plan to set the reply-to address to reply.123456789@example.com so I can treat human replies differently. Are there likely to be any problems with this? I understand that technically some receiving mail servers may send bounces to the reply-to address, but is this likely to happen in the real world? And are any human replies likely to go to the from (bounce) address if a reply-to is present?

Presumably auto responders will go to the reply-to address too?


 query : Can I run Nginx and Node.js on same dedicated server? I have one dedicated server and I do have more than 1 IP address. Could I run Nginx, Apache, LIGHTTPD and Node.js simultaneously? I

@Sherry384

Posted in: #Webserver

I have one dedicated server and I do have more than 1 IP address.

Could I run Nginx, Apache, LIGHTTPD and Node.js simultaneously?

I want to run tests and learn.


 query : Re: SEO for websites with similar content? One of my clients has four childcare websites with all similar content. Some content has been copied over to describe characteristics of their childcare

@Sherry384

If there is one owner of the different childcare centres, if it is okay for the audience to know that all four are connected, and if there is no legal or other issue surrounding that, then the best option would be to create one common platform, say
oliverchildcare.com, and promote all four under different names through the same channel,

like oliverchildcare.com/centres/centre1, oliverchildcare.com/centres/centre2, and so on.

Cross-promoting the different centres has its own advantages, both for customer experience and for SEO.

If you can lay out your offering like the above, nothing beats it.
Then, if you want to promote a service, you just need to create one common source and can note which centres provide that particular service and at what times.

But if you are putting common content across all four channels and want all of them to be indexed and rank, it will be a very tedious task and might end up with none of them being indexed.

But if you want to give one of them the weight, there are multiple ways of doing so, one of them being pointing a canonical from all the others to one primary domain.

I would recommend creating a common site and promoting them all there, if this belongs to one common client/entity/organisation and is more a case of multiple centres for the same type of activity rather than entirely different services.


 query : Re: Usage of comma in URL: encoded or not encoded Even have seen a page, where in the same url were both of enocoded and not encoded commas, like: https://example.com/product?filter_color:blue,green&filter_size:xl%2Cxxl

@Sherry384

, is a reserved character. Reserved characters are never equivalent (for normalization purposes) to their percent-encoded variants. So these URIs are not equivalent:
example.com/?foo,bar
example.com/?foo%2Cbar

Neither the URI standard¹ nor the HTTP/HTTPS URI scheme specs define a special role for , in the query component. This means that authors may use , to represent data in the query component (i.e., for whatever they want).

It can make sense to use , together with %2C in an URI’s query component. For example, an author could decide to use , for separating name-value pairs, and %2C for representing commas within values:
example.com/?score:1%2C4,time:55

(It doesn’t seem to make sense in the example URI in your question, though. Assuming that the values are "blue" and "green", as well as "xl" and "xxl", it would make more sense to either use , or %2C in both cases. Your example URI would make sense if e.g. the latter case is actually one value, so "xl,xxl".)
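You can see the distinction with any URL library; for example, with Python's urllib and the invented score/time query from above:

# ',' used as a data separator vs. '%2C' escaping a comma inside a value.
from urllib.parse import quote

pairs = [("score", "1,4"), ("time", "55")]
query = ",".join(f"{name}:{quote(value, safe='')}" for name, value in pairs)
print(query)                 # score:1%2C4,time:55
print(quote(","))            # %2C  (percent-encoded)
print(quote(",", safe=","))  # ,    (left as the reserved character)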



¹ Note that RFC 2396 is obsolete. IETF’s URI standard should always be accessible under STD 66, which is currently RFC 3986.

I gave a similar answer to the question Possible side effect using comma in querystring? on Stack Overflow.


 query : Re: Using alternate+hreflang to link to pages with different content (e.g., Guardian’s country-specific editions)? On the Guardian.co.uk homepage they have: <link rel="alternate" href="https://www.theguardian.com/uk"

@Sherry384

Yes, this doesn’t seem to be correct.

HTML defines that alternate+hreflang is for translations:


If the alternate keyword is used with the hreflang attribute, and that attribute’s value differs from the root element’s language, it indicates that the referenced document is a translation.


To be clear, the problem is not that the hreflang attribute is used (it’s correct to use it for such links), the problem is that the alternate link type is used in addition.



Search engines might of course have their own guidelines that don’t necessarily comply with the HTML specification. So authors might decide to ignore the HTML spec.

One of the three cases where Google recommends alternate+hreflang is:


Your content has small regional variations with similar content in a single language. For example, you might have English-language content targeted to the US, GB, and Ireland.


They don’t go into more detail about this case, so it’s not clear how exactly they understand "variations" and "similar", but I think cases like Guardian’s different editions are certainly not "small" variations.


 query : Re: Should a multi-type entity (local business that provides a service) be contained in "provider"? Which is considered best practice for writing multi-typed entity in JSON-LD for a local business

@Sherry384

An MTE is one entity with multiple types.

Your first JSON-LD snippet describes an MTE, because the top-level object has two types (LocalBusiness and Service).

Your second JSON-LD snippet doesn’t describe an MTE, because no object has more than one type.

If you think it makes sense that the business is the very same thing as the service this business provides, then you can use both types together. But usually these are of course separate things (and separating them allows you to be more precise and flexible), and you would connect them via the provider property and/or the hasOfferCatalog property.


 query : How to safely test SEO impact of rearranged content I'm currently managing a website for a client which has a high position in the search engines and also relies very much on that high position.

@Sherry384

Posted in: #Ranking #SearchEngines #Seo #Testing

I'm currently managing a website for a client which has a high position in the search engines and also relies very much on that high position. About 80% of traffic comes from them.

Currently the most important pages are structured like so:

Carousel

Heading 1
Paragraph
Heading 2
Paragraph
Heading 2
Paragraph

Product filter
Product gallery


The products are below the fold. To improve the UX of the pages I would like to move the products above the text content.

How can I test if this change will negatively influence the position in search engines without hurting my search ranking?


 query : Re: Are header navigation menu links considered to be internal links by Google? Are header navigation menu links, which occur on every page, considered to be internal links by Google, or would it

@Sherry384

Every link that points to the same domain as the one that it's on is considered an internal link.

Being linked from within the menu links on every page is likely to be considered more important than being linked from a single piece of content.

Just imagine you were Google trying to decide which page is more important:


One that is linked to from every page in the header (so a lot of visitors will see it)
Or one that is linked to from a single piece of content (so, in all likelihood, far fewer visitors will see it)


 query : Schema.org and SEO Is Schema.org structured data (in Microdata, JSON-LD, or RDFa) relevant for SEO? I know that structured data can provide benefits in other areas, but here I’m only asking

@Sherry384

Posted in: #SchemaOrg #Seo #StructuredData

Is Schema.org structured data (in Microdata, JSON-LD, or RDFa) relevant for SEO?

I know that structured data can provide benefits in other areas, but here I’m only asking about its significance for general web search engines.


 query : Re: X-Frame-Options Enabling on GoDaddy I am currently developing a website which runs on a GoDaddy Windows with Plesk hosting option. I am doing this as an experimental/learning site so I am trying

@Sherry384

You're running on Windows, which makes it likely that you're using IIS. Assuming this is the case, this post on StackOverflow has a discussion on how to set the X-Frame-Options header properly (https://stackoverflow.com/questions/25316781/x-frame-options-not-working-iis-web-config).

In short, it states that:

"An easy work around is to set the headers manually using:"

Response.AddHeader("X-Frame-Options", "DENY");

In the less likely case that you're running on Apache, you can find the information you need here: stackoverflow.com/questions/17092154/x-frame-options-on-apache


 query : Re: Structured Data (schema.org) ItemList, Article, or both? I am adding structured data to a new page which could sensibly be defined as either an article (http://schema.org/Article) or an Item List

@Sherry384

You can use both, if the ItemList is the primary thing the Article is about.

Use the mainEntity property of the Article type to provide the ItemList.

mainEntity is typically used to denote the primary thing a WebPage is about (e.g., probably the Article in your case), but this is not the only way it can be used (bold emphasis mine):


Indicates the primary entity described in some page or other CreativeWork.


You could use mainEntity for both ways in the same document:

<body vocab="http://schema.org/" typeof="ItemPage">

<article property="mainEntity" typeof="Article">

<ul property="mainEntity" typeof="ItemList">
</ul>

</article>

</body>


If the ItemList is not the primary thing the Article is about, there doesn’t seem to be a suitable property to link these items. The hasPart property can’t be used, because it expects a CreativeWork as value, but ItemList isn’t one.


 query : Re: Do rel='next' & rel='prev' stop all pagination from being indexed but the first? On an ecommerce website the category page of a type of product has pagination. A view all system isn't used because

@Sherry384

No, rel="prev" and rel="next" won't stop all the pages but the first from being indexed, and that is exactly what you want them to do.

The rel tags are instead used to indicate to Google that this set of pages is connected in a 'paginated' setup. After finding those tags on a page, Google will work its magic with regards to crawling, indexing and ranking those pages. Afterwards, it might, or might not, decide to crawl those pages (at a certain rate) and/or keep those pages in the index and/or let those pages rank.

Sources:

support.google.com/webmasters/answer/1663744?hl=en
webmasters.googleblog.com/2011/09/pagination-with-relnext-and-relprev.html


 query : Adsense ads only for one topic? In my AdSense account under sites I can block some categories, but this is limited and I cannot do it exactly, so my question is it is possible to limit ads

@Sherry384

Posted in: #Google #GoogleAdsense

In my AdSense account under sites I can block some categories, but this is limited and I cannot do it exactly, so my question is: is it possible to limit ads to games only?


 query : Determine Google search term for page hits I am a Web developer but my knowledge of SEO tools and techniques is scant. An SEO professional told me that Alexa SEO tool has the ability to determine

@Sherry384

Posted in: #Alexa #Google #GoogleAnalytics #GoogleSearch #Seo

I am a Web developer but my knowledge of SEO tools and techniques is scant. An SEO professional told me that the Alexa SEO tool has the ability to determine the Google search terms leading to page hits. However, I did not think it was possible for anybody except Google to know the search terms leading to page hits. Unless, of course, Google has allowed them to access analytics information.

So, is there any technique that can be used to determine the Google search terms directly (without subscribing to Google Analytics)? Also, is it true that Alexa SEO knows those search terms?

