
@Steve110

Steve110

Last seen: Tue 11 May, 2021


Recent posts

 query : Re: Will quick links to sub sections look like keyword stuffing to search engines? I have a query regarding the use of quick links (table of contents) for lengthy posts. On my website, my posts

@Steve110

It's not keyword stuffing to have the same combination of words on a page if that combination is helpful to the user. In your case, the table of contents and the duplicated heading text are likely very helpful to users.

Wikipedia uses the exact model that you are describing, and it ranks amazingly well. Its table of contents uses anchor links that point to ids on the headings:

<a href="#anatomy">Anatomy</a>

<h3><div id="#anatomy">Anatomy</div></h3>


Source: en.wikipedia.org/wiki/Cat#Anatomy
Using proper anchor text, matching id values (note that the id itself should not include the "#"; only the href does), and descriptive text within the heading tag really helps Google understand the page, and it is also really useful for the visitor.

A lot of alternative wikis use this exact same format, and many of them rank very highly.

If your articles are that long, a table of contents like you're thinking about using is probably very beneficial for your site. And longer articles usually rank higher than shorter ones.
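
As a minimal sketch of the full structure (the section names and ids here are just placeholders):

<ul class="table-of-contents">
<li><a href="#history">History</a></li>
<li><a href="#anatomy">Anatomy</a></li>
</ul>

<h2><span id="history">History</span></h2>
<!-- section content -->

<h2><span id="anatomy">Anatomy</span></h2>
<!-- section content -->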


 query : Re: Can buying ads improve the SEO of the my page? I have a page which I would like to rank in Google for a specific keyword. This is not the home page. This is a sub-page. Traffic is

@Steve110

Anything that increases your web presence is likely to positively impact your SEO. Ads increase your brand awareness. If these ads cause users to share your content on social media, organically link to your page, talk about your brand, or search for your company on Google, your SERP position is likely to go up.

But you shouldn't expect the links pointing to your site from ads to increase your rankings. Many ads carry a rel="nofollow" attribute, so the backlink won't count. And Google has said that buying backlinks is a big no-no: if you buy an ad to your site with a paid-for followed backlink, it's possible you may get penalized if Google finds out. This is just something you have to think about.

As for advertising in general though, yes it will help your brand. And anything that helps your brand that is in line with Google Webmaster Guidelines is likely to increase your rankings.


 query : Re: Google search results show "Loading, downloading assets" for all site pages because Googlebot is not fully running JavaScript I host maps. Each map has its own unique URL. But all these URLs

@Steve110

Google can crawl JavaScript, but it's not guaranteed that it will be able to process every kind of JavaScript.

The meta tags of a page are delivered in the initial HTML, and Google reads that. There is no certainty that Google will update the meta tags based on your JavaScript.

The following code was reported to work in changing a site's meta tags while having Googlebot properly read it to update the SERP tags. I have personally not tested it.

<script>
var m = document.createElement('meta');
m.name = 'description';
m.content = 'This tutorial has some helpful information for you, if you want to track how many hits come from browsers where JavaScript has been disabled.';
document.head.appendChild(m);
</script>


Another one:

document.title = "Google will run this JS and show the title in the search results!";


It is better if you can render the tags server-side in PHP, but if you have to do it in JS you can try these out. If you do and they work, please let me know, as I might like to use this script later on as well.


 query : Re: Hosting a blog on Google Cloud could help in SEO ranking? We usually host our blogs on shared hosting providers. These service providers use same shared space and IP for hundreds of blogs and

@Steve110

Google's algorithm has evolved past what any one person can fully understand, and that even includes the Google engineers who developed it.

What we do know is that various parts of Google's systems are designed to perform tasks that increase usage of Google's products.

For instance, YouTube's recommendation algorithm has been built around the concept of increasing watch time, so the recommended videos YouTube shows you are the ones it thinks will keep you on the platform longer.

I imagine there is a high likelihood that there are algorithms in place to increase Google Cloud usage, to increase Adsense clicks, etc etc. As a result, it's quite possible that what you're suspecting may be true. And if it's not an additional ranking signal now, it could be in the future. Ultimately, using Google Cloud probably doesn't have much downside (if the servers work properly). But there is a hidden upside that may or may not exist.


 query : Re: Impact of subdomain urls on SEO I have an application which is hosted on a subdomain URL of my client's primary domain. Below is the example of how my primary domain is and how the subdomain

@Steve110

Subdomains are treated as separate sites from their parent domain. Google will indeed choose whether to rank your subdomain or example.com. So in a sense, yes, they can be considered competitors of each other, as they rank separately.

If the content on the subdomain isn't as great as your main domain, you shouldn't expect it to rank over your main domain. This is especially the case if you have inbound links coming to your main domain and not to your sub.

If your subdomain has a followed link (no rel="nofollow") pointing back to the main domain, this will pass link juice to the main site. You may also want to use rel="nofollow" when linking from the main site to the subdomain.

Overall, I wouldn't be too concerned about your subdomain ranking over your main domain. If and when that happens, you can then use canonical tags, 301s and followed links to give priority to your main domain.

But for now if I were you, I would simply try to rank both the main domain and the subdomain. If Google sends your subdomain traffic that it wouldn't have sent to your main domain, it's worth it.
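
For reference, the two mechanisms mentioned above look roughly like this (example.com and sub.example.com are placeholders):

<!-- On a subdomain page that duplicates a main-domain page, point the canonical at the main domain -->
<link rel="canonical" href="https://example.com/page/" />

<!-- A nofollow link from the main site to the subdomain -->
<a href="https://sub.example.com/" rel="nofollow">Our app</a>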


 query : Re: Redirects of only some pages from parent company site to new website We have recently split from our parent company and are now an independent company with a brand new website. The parent

@Steve110

Link juice degrades with how far away the link is from the homepage and other important pages on the site.

If a homepage links to another site it passes partial juice.

If a homepage links to an internal page that links to a site the juice is degraded between the homepage to the internal page, and then degraded again from the internal page to the site.

It is much better for your SEO to have links coming directly from the parent site on all of their major pages. If the link is coming from a subpage, the link juice degrades substantially.

Perhaps even more important is page authority. If the parent site is linking directly to you from pages that are ranking on Google, these carry tremendous link juice. If instead the parent site is going to link to a subsidiary-page.html that talks about your site, and then links to you, these pages are worth far less. The subsidiary-page.html has very low page authority. It's not currently ranking on Google for any keywords, and it probably never will rank for much because it doesn't have a whole lot of important content on it. It's just a page that links to your site.

If you're linked to from subsidiary-page.html, you've been indirectly linked to from a high authority domain, but from a page with low page authority. Not the greatest.

If you're linked to from many pages that are ranking on Google, you've been directly linked to from a high authority domain and linked to by many high authority pages. Great!


 query : Re: Why do some data analysis experts recommend having two Google Analytics accounts? I read some articles about data-analysis and they recommend 2 Google Analytics accounts for better tracking. I

@Steve110

There are probably some reasons why you might want to send pageview trackers to multiple Analytics accounts, though having multiple analytics accounts isn't at all necessary and I use just one.

You can have the analytics code send a pageview to two tracker IDs. The .js code required is only loaded once, and the sent view doesn't take up much bandwidth on the client's end nor should it slow down page load time.
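
A minimal sketch, assuming the classic analytics.js setup (the UA- property IDs below are placeholders):

<script>
// analytics.js itself is loaded once as usual; then register two trackers.
ga('create', 'UA-XXXXXX-1', 'auto');                  // first Analytics account
ga('create', 'UA-YYYYYY-1', 'auto', 'secondTracker'); // second account, as a named tracker
ga('send', 'pageview');                               // pageview to the first account
ga('secondTracker.send', 'pageview');                 // same pageview to the second account
</script>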

You can have an analytics account that tracks all of your websites. Simply put that analytics id into all of your site's analytics tracking code and you can view the pageview counts for all of your sites each day and in real time with one simple analytics tab. You can also add in a tracking id code for individual sites so that you can jump to that site in analytics and view the stats for that individual site.

A person might also want different analytics tracking code if they've heavily modified their analytics data with different customizations and events, and still want to have an analytics account that shows the default information as well.

This is also useful if you want a tracking code on some pages but not all of your pages and you want to easily access that part of your site. So you can have a tracking code for example.com/*, example.com/category/, example.com/news, etc. This could be valuable if you want to be able to share the analytics for your example.com/news with other email addresses but don't want them to have access to all of your example.com analytics.

I'm sure there are other reasons why someone might want multiple analytics accounts on one webpage; most of it has to do with being a real stats junkie. One analytics tracking ID will do the job unless you have specific reasons why more than one is useful to you.


 query : Re: Should I use sitewide backlinks from relevant niche? I have 4 websites based on a specific niche, So, should I get Sitewide backlinks from all sites to the new one? Example: Site 1 Site 2

@Steve110

Google has hit webmasters with penalties for running Private Blog Networks (PBNs) in the past. A PBN is when you have several websites that link to your main website in order to rank it higher. Google took action against PBNs because it found people using the black hat tactic of buying domains that already had rankings and indexed pages, putting up a mediocre site on those domains, and then linking them to their main website. The tactic worked for a while and caused websites to rank much higher than they should have.

Links should flow from websites to your site organically and naturally. It is fine if you have 5 good websites and you want 4 of them to link to your 5th one. There are a lot of major networks that do this when they have sub-sites.

But there is a danger here: if the algorithm thinks that your 4 websites exist only to link to the 5th one, and that those 4 websites have very poor content, this could be classified as a PBN. If your 4 websites are fantastic sites that rank well with Google, and Google trusts these sites as great content, then it could associate your 5th site positively because of its trust in your 4 websites. If, on the other hand, your 4 websites score poorly with Google, this could pass over negatively to your 5th site. Even if all 5 of your sites are excellent, the risk of being treated as a PBN might be high.

I've been considering linking all of my sites to one main network site for some time now, but the risk of being classified incorrectly as a PBN has been worrisome enough for me that I've shied away from it. If I decide to have my sites link to one main site I'm thinking of using the nofollow tag just so that it's not classified as a PBN.

If anyone has any articles or evidence that linking all of your sites to one main site is safe I'd appreciate this information as well.


 query : Re: How to improve SEO ranking of keyword with least content on the webpage? I have a challenge to rank my page for few keywords. But the dilemma is my page is bound to have very less content

@Steve110

What you're trying to accomplish isn't necessarily a very good strategy, depending on what kind of keywords you are trying to rank for.

If you are trying to rank for "cheesecake recipe" and just keyword stuff a lot of cheesecake keywords onto the page your entire site could even get hit with a keyword stuffing penalty. The page has nothing to do with cheesecakes and you're trying to rank for something you shouldn't be ranking for.

If on the other hand you're trying to rank your login page for "example brand login", that is fine. That is what the page is about and it should rank for it.

Google isn't going to rank pages solely based on keyword density. You can't just stuff keywords all over the place and expect to rise to the top. When a person searches for "cheesecake recipe", Google is going to show pages that it thinks have the absolute best recipes posted on them. It's not going to show your login page, almost regardless of what you do.

If you want your login page to rank for a keyword like "cheesecake recipe", you literally have no choice but to put an amazing cheesecake recipe on the page; only then do you have a chance. Your h1, title, keyword density and other signals will almost certainly not rank you for this. You need the right content, and good backlinks help.

If you use invisible content to try to trick Googlebot, you might as well go get a minimum-wage job, because you can expect to get your site completely deindexed. Google doesn't reward people for trying to trick its bot.

Just make good content for the keywords you are trying to rank for. Google wants to show the absolute best page on the web for what a person is searching for. If you can make the best content then all you can do is hope that Google agrees.


 query : Re: Facebook crawler with no user agent spamming our site in possible DoS attack Crawlers registered to Facebook (ipv6 ending in :face:b00c::1) were slamming our site, seeing 10s of thousands of hits

@Steve110

Sources say that facebookexternalhit does not respect crawl-delay in robots.txt because Facebook doesn't use a crawler; it uses a scraper.

Whenever one of your pages is shared on Facebook, it scrapes your site for your meta title, description and image.

If Facebook is scraping your site 11,000 times in 15 minutes, my guess is that the most likely scenario is that someone has figured out how to abuse the Facebook scraper to DDoS your site.

Perhaps they are running a bot that is clicking on your share link over and over, and Facebook is scraping your page every time that it does.

Off the top of my head, the first thing that I would try to do is cache the pages that Facebook is scraping. You can do this in htaccess. This will hopefully tell Facebook not to load your page with every share until the cache expires.

Because of your issue, I would set the HTML expiry to longer than usual.

In .htaccess:

<IfModule mod_expires.c>
ExpiresActive On
ExpiresDefault "access plus 60 seconds"
ExpiresByType text/html "access plus 900 seconds"

</IfModule>


Setting HTML to expire after 900 seconds will hopefully prevent Facebook from fetching any individual page more than once per 15 minutes.



Edit:
I ran a quick search and found a page written a few years ago that discusses the very issue you're encountering now. This person discovered that websites could be flooded by the Facebook scraper through its share feature. He reported it to Facebook but they chose to do nothing about it. Perhaps the article will more clearly tell you what is happening to you and maybe it can lead you in the right direction as to how you'd like to deal with the situation:
chr13.com/2014/04/20/using-facebook-notes-to-ddos-any-website/


 query : Re: Why is google not indexing (or ignoring) those 200,000+ pages ? I was wondering why on Google Search Console, 200,000+ of my pages are not indexed (it's not because of robots.txt or voluntary

@Steve110

I was running a site last year that had about 30 million pages indexed with Google. I found that Google would crawl many pages but choose not to index a large percentage of them. I felt that many of my best pages that were crawled weren't indexed while pages with far weaker content were. Ultimately I concluded that Google chooses which pages to index and which pages not to and I had very little control over the selection process. Even proper internal linking with many links pointing to a specific page didn't mean it would get indexed. Pages with very few, in some cases even zero internal links were often indexed.

Google may discover links on your site but choose not to crawl those pages. And it may crawl pages and choose not to index them even if they are high quality pages.

I wish there was a simple fix for this issue to help you. But in the end I think you'll just have to cross your fingers and hope the pages you want indexed end up there.

If you can increase the links pointing to that page from external domains this will send a huge signal to Google that it needs to index them because those pages will have larger page authority than other pages on your site.

If you think there could be errors in the crawling process of these pages you should always try fetch and render with Googlebot. Enter some of the URLs into the following link and click fetch and render, then click on the result after Google has crawled it. Google will show you a screenshot of the page it has crawled which will let you know if it sees all of your content or if there are any issues. If it sees all of your content and still chooses not to index it then it's simply an algorithmic thing on Google's end and there is only so much you can do to improve your index status.
www.google.com/webmasters/tools/googlebot-fetch?hl=en&siteUrl=
In Search Console for one of my sites, Google has chosen to exclude many of my URLs as well.


 query : Open graph data not visible in iFramely or WhatsApp despite working on Facebook I have this self-hosted WordPress site. I have added all the needed meta tags. They do come up on Facebook open

@Steve110

Posted in: #MetaDescription #MetaTags #OpenGraphProtocol

I have this self-hosted WordPress site. I have added all the needed meta tags. They do come up on Facebook open graph debugger and show up when sharing links.

They don't work anywhere else: the iFramely debugger cannot find any meta tags, and sharing on WhatsApp or Telegram doesn't show any preview.

What do I need to do to get all these other places to recognize my OG data?


 query : Re: Does constantly doing small edits to a website affect its SEO? Are there SEO reasons to not keep a page static for several years even though it doesn't really need to be changed content wise?

@Steve110

Top SEO specialists and SEO websites widely report that updating your pages routinely does impact your SEO.

When you search Google for keywords, you'll find that most of the results are generally newer. And older results usually aren't listed unless your search is very obscure.

This is because Google knows that newer articles are often more valuable than older ones. The content is usually fresher and more current than an older source; newer articles are simply more relevant.

Google sees the last time you modified the page.
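
Besides the page itself, one place it can pick that date up is the <lastmod> field in your XML sitemap (a minimal sketch; the URL and date are placeholders):

<url>
<loc>https://example.com/my-article/</loc>
<lastmod>2021-05-11</lastmod>
</url>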

If you delete a sentence and add a new one just to update the page, Google is still just an algorithm: it doesn't really know whether the new sentence is useful or whether you removed an old sentence that was no longer relevant. So in a way, updating your webpages just to appease Google is gaming the system; it makes an old article appear new to Googlebot when it's not. But just about every SEO expert advises doing this, and from all that I've read, I haven't seen anyone get hit with a penalty for updating their pages.

It's advised to keep your pages regularly updated.


 query : Re: How do I specify the HEVC codec in the HTML5 video source type attribute? (Originally posted on Stack Overflow) I'd like to load and play a smaller HEVC-encoded video on web browsers with support

@Steve110

I think the issue you're likely having here is that you have ordered the non-HEVC video second:

<source src="clip-hevc.mp4" type="video/mp4; codecs=hevc">
<source src="clip.mp4" type="video/mp4; codecs=avc1">


This is probably telling the browser to load clip-hevc.mp4, then telling it to load clip.mp4 instead.

If you order them the opposite way, this will likely tell the browser to load clip.mp4 and then load clip-hevc.mp4 instead:

<source src="clip.mp4" type="video/mp4; codecs=avc1">
<source src="clip-hevc.mp4" type="video/mp4; codecs=hevc">


So I think that is the glitchy issue you are encountering. If you reorder clip-hevc.mp4 to be at the bottom, it will probably load HEVC if possible.

I can't guarantee, however, that this will not load HEVC if it isn't browser compatible. You will probably want to test it on a device that does not support HEVC and see if it loads clip.mp4 instead.

There are also ways to check which codecs are compatible with the browser. Here is one example:

var testEl = document.createElement( "video" ),
mpeg4, h264, ogg, webm;
if ( testEl.canPlayType ) {
// Check for MPEG-4 support
mpeg4 = "" !== testEl.canPlayType( 'video/mp4; codecs="mp4v.20.8"' );

// Check for h264 support
h264 = "" !== ( testEl.canPlayType( 'video/mp4; codecs="avc1.42E01E"' )
|| testEl.canPlayType( 'video/mp4; codecs="avc1.42E01E, mp4a.40.2"' ) );

// Check for Ogg support
ogg = "" !== testEl.canPlayType( 'video/ogg; codecs="theora"' );

// Check for Webm support
webm = "" !== testEl.canPlayType( 'video/webm; codecs="vp8, vorbis"' );
}
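
The snippet above doesn't include an HEVC check. A minimal sketch of the same canPlayType approach for HEVC, reusing testEl (the "hvc1"/"hev1" codec strings are common examples, but the exact string depends on the encode, so treat them as assumptions to verify):

var hevc = "";
if ( testEl.canPlayType ) {
// canPlayType returns "", "maybe" or "probably"
hevc = testEl.canPlayType( 'video/mp4; codecs="hvc1.1.6.L93.B0"' )
|| testEl.canPlayType( 'video/mp4; codecs="hev1.1.6.L93.B0"' );
}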


 query : Re: Is using Facebook advertising's "buyers" target a good way to lure likely shoppers? I need to advertise a website to sell products related to Saint Seiya, so I decided to begin with Facebook

@Steve110

That is a low enough budget that you don't have a large enough sample size. It is fairly normal for companies to have sizeable customer acquisition costs, and depending on how competitive and lucrative your field is, they can be even higher. As a result, you might expect only 1-2 customers from your marketing investment, and due to simple probability, it's quite possible to land 0 customers in an experiment of this size.

Despite the small sample size, it may be that your ad campaign doesn't work and that you do need to target differently. But it's a bit soon to say for sure.


 query : Re: Will moving a Hindi website from a subdomain to subdirectory under an English website affect its SEO? We have recently changed our Hindi website hosted under a subdomain to a subdirectory of

@Steve110

If you have geo-targeting for your subdomain and change it to a directory on your site you will indeed lose the geo-targeting for that subdomain.

Note: moderator Stephen Ostermiller recently showed me that you can target individual pages by language and country code using hreflang:

<link rel="alternate" href="http://example.com/en-ie" hreflang="en-ie" />
<link rel="alternate" href="http://example.com/en-ca" hreflang="en-ca" />
<link rel="alternate" href="http://example.com/en-au" hreflang="en-au" />


In your case, to target India with an English page you may want

<link rel="alternate" href="http://example.com/en-in" hreflang="en-in" />

support.google.com/webmasters/answer/189077?hl=en


 query : Re: SEO for dynamically multilanguage web site I make a web page with dynamically changing language with php. I make html with $lang key <span class="slicice_opis"> <?= $lang['LIJEVO_PRVI_OPIS_PARAGRAF_JEDAN'];

@Steve110

If you're going to use the "?" parameter on your site, you should probably notify Google as to how to react to these parameters. You can tell Google how to treat your parameters at www.google.com/webmasters/tools/crawl-url-parameters?hl=en&siteUrl=
Make sure you are using the hreflang="x" tag if you want to signal to Google what language your page is in. Learn more here: support.google.com/webmasters/answer/189077?hl=en
If you want to simplify the process and not have to inform Google about your "?" parameter pages, you can simply change the URL of the page altogether. Google will index both example.com/page/en and example.com/page/es
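
In that case you would also cross-reference the two language versions with hreflang alternates; a minimal sketch using the URLs above (adjust to your real paths):

<link rel="alternate" href="http://example.com/page/en" hreflang="en" />
<link rel="alternate" href="http://example.com/page/es" hreflang="es" />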


 query : Re: Thousands of links from Russian Sites I've recently checked into my Google Webmasters account and found that I have several thousand links from .ru domains. Almost half of all links from one

@Steve110

If you believe that you are receiving links that can hurt your site, you will want to disavow them.

You can disavow links here: www.google.com/webmasters/tools/disavow-links-main
There have been cases of competitors trying to derank other websites by setting up spammy link campaigns targeting them. The simplest way to combat this is with the disavow tool.

Before disavowing, I advise that you analyze the links closely and determine for yourself if you think they are spammy or detrimental to your site. There are a lot of valid Russian websites on the internet that may be linking to you. And there are also many Russian bot campaigns on the web that will perform malicious activity and negative link campaigns as well.
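
If you do end up disavowing, the upload is a plain text file; a minimal sketch (the domains here are placeholders):

# spammy domains found in the links report
domain:example-spam.ru
domain:another-spam-example.ru
# a single URL can also be listed
http://example-spam2.ru/some-page.html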


 query : Re: What can cause "Discovered - currently not indexed" in the new GWT The new GWT shows sitemaps links divided into new categories. Two that confuse me: 1. Discovered- currently not indexed 2.

@Steve110

Google may discover and crawl your pages, but it does not mean that it will necessarily index them.

There are many reasons why Google might not index a page. Perhaps it found duplicate content. Perhaps it does not feel that it offers enough value for any specific search queries. Google may have found something about your page that it doesn't like.

Whatever the reason might be, Google just hasn't decided to index some of the pages that it has discovered and crawled on your site. It's pretty normal for some of your pages not to be indexed. Some pages on my sites aren't indexed despite having better content than many indexed pages and a substantial number of internal links. Google's indexing system is driven by machine learning, so it may decide not to index some pages for a variety of unknown reasons.


 query : Re: How can a site contain duplicate content that's required, but not affect its SEO? I am making a site for a specific niche. This site is very similar: Sample Article 01: https://alternativeto.net/software/google-chrome/

@Steve110

Your above the fold content is not duplicate. It looks like you've structured your site to show Google that one page is about Chrome and another page is about Firefox. As a result, Google should know which page to send its users when they search for Chrome or Firefox.

As long as you've clearly identified to Google what the page is about using keyword density for the keyword "Chrome" or "Firefox", and have different title and meta description tags, I think this should be fine.

It is normal for sites to sometimes have similar content on pages for similar products. Your below-the-fold related-software content isn't really targeting the browser-name keyword; it just offers further content-rich material for the visitor on the page.

I don't think you should expect any duplication penalties for this.

If you want to increase the chances that these pages rank, enhance their original content as much as you can. Longer, more thorough pages tend to do well; many SEO studies suggest 1,500+ words.


 query : Designing an HTML form input that takes a range of numbers as well as text options I have the following form: <table class="table"> <thead> <tr>

@Steve110

Posted in: #Django #Forms #Html #Python #WebsiteDesign

I have the following form:

<table class="table">
<thead>
<tr>

<th><h4>Monday</h4></th>
<th><h4>Tuesday</h4></th>
<th><h4>Wednesday</h4></th>
<th><h4>Thursday</h4></th>
<th><h4>Friday</h4></th>
<th><h4>Add</h4></th>
</tr>
</thead>
<tbody>
<tr>
<form id="tracking_form">
<td><input type="number" name="monday" class="weekday_points" min="0" max="100" required id="id_monday" /></td>
<td><input type="number" name="tuesday" class="weekday_points" min="0" max="100" required id="id_tuesday" /></td>
<td><input type="number" name="wednesday" class="weekday_points" min="0" max="100" required id="id_wednesday" /></td>
<td><input type="number" name="thursday" class="weekday_points" min="0" max="100" required id="id_thursday" /></td>
<td><input type="number" name="friday" class="weekday_points" min="0" max="100" required id="id_friday" /></td>
<td><input type="button" class="btn btn-default" onclick="submitIfUnique()" value="Add Points"></td>
</form>
</tr>
</tbody>
<tfoot>

</tfoot>
</table>


Where a user enters a number from 0 to 100 as a score for a student. I also want to track whether the student was absent, whether the absence was explained, etc. If the student was absent, then the user shouldn't then be able to enter a score. What would be a good form design for a use case like this?

I'm working in Django, so I could change the weekday number inputs to text inputs and provide choices like this:

settings.py:

# Django choices need (value, label) pairs, and CharField values should be strings
SCORE = [(c, c) for c in ['Present', 'Absent (Not Explained)', 'Absent (Explained)', 'Integrating'] + [str(x) for x in range(0, 101)]]


models.py:

# max_length is required for CharField
monday = models.CharField(max_length=32, choices=settings.SCORE)
tuesday = models.CharField(max_length=32, choices=settings.SCORE)
wednesday = models.CharField(max_length=32, choices=settings.SCORE)
thursday = models.CharField(max_length=32, choices=settings.SCORE)
friday = models.CharField(max_length=32, choices=settings.SCORE)


The problem is, the input would now be a dropdown with over 100 options, which makes it more difficult for a user who just wants to type a number.

The other thing I thought of is to split the above solution into two fields; the first for whether the student was 'Present', 'Absent (Not Explained)', 'Absent (Explained)', or 'Integrating'. Then, if they were present, a number field appears which lets the user enter the score. But this seems difficult, and my form already takes the full width of the screen, so adding even more fields is less than ideal.

Are there any other simple options available to me design wise? Nothing I've come up with so far seems feasible.


 query : Re: How to use toggleToc() in a MediaWiki installation I admin a wiki site running MediaWiki 1.29 and I have a problem collapsing the TOC on pages. I would be interesting in keeping the Contents

@Steve110

The MediaWiki link provided above explains that the following code needs to be implemented:

// The enclosed code runs only after the page has been loaded and parsed.
window.addEventListener('DOMContentLoaded', function() {
try {
// Detect whether the page's TOC is being displayed.
if (document.getElementById('toc').getElementsByTagName('ul')[0].style.display != 'none') {
// Use MW's toggleToc() to hide TOC, change "hide/show" link text, and set cookie.
toggleToc();
}
} catch (exception) {
// Probably this page doesn't have a TOC, ignore the exception to prevent console clutter.
}
}, false);


If you have implemented the above code on your website and it has not been functioning properly, my best understanding of the code above is as follows:


window.addEventListener('DOMContentLoaded', ...)


This listens for the page content to be loaded and parsed, then runs the function that follows.


if (document.getElementById('toc').getElementsByTagName('ul')[0].style.display != 'none') { toggleToc(); }


If the list inside the element with id="toc" is NOT set to display:none, then call the function toggleToc().


You first must make sure that the element with id="toc" is not set to display:none. And then you must make sure that somewhere on your page or in your .js files the function toggleToc has been defined. If toggleToc has NOT been defined, then you must write this code yourself or find it online and add it to your page or .js files.


Since you have received the error:

ReferenceError: toggleToc is not defined

my best guess is that you do not actually have the function toggleToc in any of your .js files or on your webpage. I would check to make sure that it is there.
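
If toggleToc really is missing from your installation, a rough stand-in (an assumption on my part, not the original MediaWiki function, which also updates the hide/show label and sets a cookie) might look like:

function toggleToc() {
var list = document.getElementById('toc').getElementsByTagName('ul')[0];
list.style.display = (list.style.display == 'none') ? '' : 'none';
}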


Hope I was of help to you.


 query : Re: When to render/build my html that changes frequently? I am building a simple website that will display some information. This information can be retrieved from a third-party service and changes

@Steve110

In PHP, using file_get_contents to retrieve the remote data and file_put_contents to store a cached copy with an expiry can do what you are asking.

You can also use JavaScript or AJAX to render the data from the remote server on the client side. But note that Google doesn't reliably count JavaScript-rendered content on your page, so any content rendered this way is likely not to help you rank in the search results.


 query : Re: My blog post lost all rankings after posting the same content on Medium and don't have it back after removing the Medium post I copied my article to a medium post, I lost all my ranking.

@Steve110

Google is supposed to identify which articles were published to which websites first in order to prevent plagiarism on the web. Unfortunately, many webmasters have experienced similar situations in which their content was copied and published by a 3rd party, and the 3rd party went on to rank higher than them in the SERPs as a result.

As others have mentioned, you should be using a rel=canonical tag on your Medium posts.

View the post on Medium and right click->view page source. Control+F (search) for canonical. If you cannot find the canonical tag then Medium is not identifying your article as the original source and you need to correct this.

One method that can help is to link to your own original article from the Medium post (e.g. "Source: example.com/article"). This helps signal to Google where the article originally came from, and it can also drive traffic to your blog from Medium.

If rel=canonical has been established in the Medium post, then Google has either deranked your article for other reasons or it's simply bypassing the canonical tag and is still favoring Medium over your own blog.


 query : Re: What are the best ways to increase a site's position in Google? What are the best ways to increase a site's position in Google?

@Steve110

There are some basic rules to increase your Google rankings


Offer the absolute best content on specific keywords that you are targeting.
Get links from highly trusted sources, especially if they are related to your niche.
Have text and image rich pages. Google wants to read words that it has never read before, and it wants to see amazing images related to a specific topic.
Dominate your niche. If your page or site is about "cupcakes", Google wants to send traffic to the absolute most trusted "cupcake" websites. Dominating your niche is not an easy task, but if you can do it, you are sure to rank for keywords in your niche.
Have strong user engagement on your page. When Google sends users to your page, if it finds that metrics like time on page, clicks and shares are high and bounce rate is low, Google is going to realize that users really love your page, and it is going to send you more traffic. Figure out how to make pages that organic search visitors will absolutely love.
Have a high click-through rate in the SERPs. When a user searches for "Vanilla Cupcakes" and clicks on your page about vanilla ice cream cupcakes but not on someone else's page about vanilla frosting cupcakes, Google learns that users more often want "Vanilla Ice Cream Cupcakes" when they search for "Vanilla Cupcakes", and it will start ranking your page higher than the "Vanilla Frosting Cupcakes" page in the search results. In other words, make sure users click on YOUR page when they see it in Google search. This can be helped by having a great URL such as example.com/vanilla-icecream-cupcakes, as well as a great title and description. If you are in position #4 for a specific keyword and users are clicking on your result much more often than usual, Google is going to start ranking you #3, #2 and so forth.


 query : Re: What is `/&wd=test` URL that is being requested from my site, probably by bots I'm seeing error logs on a website because something tried to access: example.com/&wd=test the HTTP_REFERER

@Steve110

It's most likely Baidu just being weird, and their bots sending traffic to invalid URLs.

If Baidu is sending you traffic, even to weird URLs like that, I wouldn't necessarily start blocking their bot. But you can block them if you want via robots.txt, and because Baidu is a legitimate crawler (it's a major Chinese search engine), it should most likely respect your robots.txt file.

The invalid URLs that they are sending traffic to shouldn't cause any issues for your site unless there is something on your server that causes errors in this regard. Most likely, the visitor will just be sent an invalid page.

You can strip these query strings in your .htaccess file and 301 redirect back to the correct page: remove every character after the question mark, and then have .htaccess remove the question mark itself. This keeps the user on the page that was intended. If you want a reference, run a search for how to remove query strings with .htaccess on Stack Overflow.
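
A rough mod_rewrite sketch of that idea (untested here, so treat it as a starting point):

RewriteEngine On
# If there is any query string, redirect to the same path without it.
# Careful: this strips every query string site-wide, so scope it as needed.
RewriteCond %{QUERY_STRING} .
RewriteRule ^(.*)$ /$1? [R=301,L]

Note that in the example.com/&wd=test case the &wd=test part is actually in the path (there is no "?"), so you would need a similar rule matching the & in the path instead.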


 query : How to force Google and other bots to pick actual images and not thumbnails? For example, if there is an online shopping websites with thousands of small thumbnails of products and when you

@Steve110

Posted in: #Images #Seo #WebCrawlers

For example, suppose there is an online shopping website with thousands of small product thumbnails, and when you click on one it shows the actual retina/full HD image in a popup. In this scenario, how can we ensure that Google or other bots crawl only the retina/HD image and not the thumbnail, so that the full image is what appears when someone searches in Google Images?


 query : Re: Is link building a white hat SEO activity? Link building is the process of manually building links in any way possible to show the site in question as having more of a value to search engines

@Steve110

Links aren't everything in SEO, but a big share of the search engines' algorithms depends on your website's incoming and outgoing links to rank it in the SERPs.

Through links, engines can examine not only the popularity of websites and pages, but also metrics like trust, spam, and authority.

So link building is not a black hat SEO technique, as long as you do it correctly and without any spam.

A website like Wikipedia has an abundance of varied sites linking to it, which means it's undoubtedly a popular and significant site. To gain trust and authority you'll need the help of other link partners. The more popular, the better.

One of the strongest signals the engines use in rankings is anchor text. If lots of links point to a page with accurate keywords in the anchor text, that page has a very good chance of ranking well for the phrase targeted by that anchor text.


 query : Is the time PHP takes to render a page and serve it to the client ever a problem? PHP is a ubiquitous language that among other things allows a server to render a webpage on request and

@Steve110

Posted in: #Latency #Render #Server

PHP is a ubiquitous language that among other things allows a server to render a webpage on request and serve the resulting HTML to the client.

For example, the served page might be different for different clients and the server "builds" the HTML per request.

I assume that this in general is practically instantaneous.
But can this be a problem for more complex webpages?
(Example: if, to render a page, the server needs to access a third-party service that may have its own latency.)

If so, what solutions can we implement to avoid this problem?

