
@Holmes151

Holmes151

Last seen: Tue 11 May, 2021


Recent posts

 query : How do I embed Google Analytics in a dashboard? There seem to be 3 methods as listed below. The third is superior for what I'm after. How does it work? Where is this process documented?

@Holmes151

Posted in: #GoogleAnalytics

There seem to be 3 methods as listed below. The third is superior for what I'm after. How does it work? Where is this process documented?


Using the Google Embed API, as shown in the "Basic Dashboard" example in the Embed API documentation. It uses a one-click login method and requires you to know and enter user credentials for every session. This is what I use right now.
The Google Embed API documentation describes "Server-side Authorization". It uses a key in a JSON file made via the Google Developer Console. It can be authorized indefinitely to access any kind of data source enabled in the Google Developer Console.
There seems to be a third method. It is used, for example, in Google Analytics for WordPress. It uses a one-time approval which leads to a key string, as shown in this video, by which the plugin can be authorized indefinitely. The Google Developer Console is not accessed and no JSON file is uploaded anywhere. This offers the best UX.


 query : The profitability of AdSense on a new website Please note: I've read lots of previous answers like this that seem to argue AdSense is never worth it and that other ad networks should be relied

@Holmes151

Posted in: #Advertising #GoogleAdsense #Monetize #WebDevelopment #WebHosting

Please note: I've read lots of previous answers like this that seem to argue AdSense is never worth it and that other ad networks should be relied on, but those questions are almost a decade old (which may as well be three decades in the world of web development and internet marketing), and I've read tons of more recent questions that indicate the vast majority of users on this site are still using AdSense.

For that reason, I'm looking for modern, considered answers that apply to the current landscape of the web, with all the challenges that come with it (the dominance of Google, the recent rise of ad-blocking measures, etc.), from experts who have current, first-hand experience of it.



I'm considering setting up a website in a technology-related niche that I know well. It will be the first of its kind with unique content that very few people have knowledge of and no-one has ever bothered to document in an easily accessible form, so I expect it to have demand. However, the size of the niche places an inherent limit on that demand. I'm not intending for the site to be my living, but it'd be nice to make back the effort of writing up the content for it, and I especially don't want to make a loss on running it.

With those reasons in mind, I have a few questions that I would really appreciate answers to.

1. When starting out, roughly how many daily unique visitors would my site need to reach in the current climate before AdSense begins to pay for its domain and hosting costs?

I'm aware that Google measures ads per 1,000 impressions, i.e. CPM. Does this essentially mean that, unless I'm regularly getting 1,000 impressions on ads, AdSense will leave me at a loss?
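
For illustration, my rough back-of-envelope understanding is this (the RPM figure is a purely hypothetical assumption):

500 daily pageviews x 30 days        = 15,000 monthly ad impressions
15,000 / 1,000 x $1.50 (page RPM)    = roughly $22.50 per month

i.e. I understand earnings to be pro-rated per impression rather than paid only in whole blocks of 1,000 - please correct me if that's wrong.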

2. Secondly, I've read that higher-volume sites stand to gain more from networks other than AdSense. Does this mean that AdSense is the best-paying advertising network for a site that's starting out and has few visitors? If it isn't, which networks are?



I understand that these questions are probably somewhat hard to answer, but I'm hoping that enough users of this site have experience with Google's figures and methodologies that they can give me a reasonable enough estimate for me to decide whether to go ahead or not. Again, thanks in advance, it's much appreciated.


 query : Re: Visitors sending cookies that were never set by the host Occasional visitors to my sites send one or more cookies that were never set by my hosts. Sometimes it's an obvious bot. Sometimes it's

@Holmes151

...cookies that were never set by my hosts


Is it possible that a previous version of the site, or even the previous owner of the domain, set cookies with a very long (years+) expiry?


Are there legitimate reasons why a user agent would deliver cookies to a host which never set them?


No. It would make no sense. It just bloats the request with meaningless data (unless the site in question is expecting it).


Do any of the popular human-powered browsers ever do this?


I very much doubt it. I've never seen this behaviour with the big named browsers.

However, it may be possible for a rogue browser extension to send these cookies? See security.stackexchange.com/questions/15259/worst-case-scenario-what-can-a-chrome-extension-do-with-your-data-on-all-websi

Sometimes it's an obvious bot.


My bet would be that it's a non-obvious bot testing for exploits.


 query : Re: Buying second domain from different provider I have a domain a.com registered with one hosting company together with a hosting package. This is where I normally have all my websites. I now want

@Holmes151

The other answers haven't really covered the specifics...


Can I leave the domain with whois and just make it send visitors to my normal hosting provider without the url changing?


Yes. You can either:


Change the NAMESERVERS (NS records) on your domain at your new registrar (ie. "whois.com") to point to your existing host (assuming your host provides DNS services). Your host then handles the master DNS records for the domain. This is the easier option since your existing host will likely configure the necessary DNS records for you (A, CNAME, MX records etc.).


OR


Create an A record in the master DNS records at your registrar (assuming the NAMESERVERS currently point to your new registrar) that points to the IP address of your host. This points all HTTP/S traffic to your host.


In both cases you will need to configure your existing host to accept requests at this domain. If this is the only domain on a new website then you can probably just state the domain name during account creation. If this is an existing website and you are wanting to point additional domains at it then you'll need to create an additional VirtualHost or ServerAlias (in cPanel speak this would be an Addon or Alias domain respectively).
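
For illustration, the records involved might look something like this (the hostnames and IP address are placeholders):

Option 1 - delegate the whole domain to your host's nameservers:

a.com.      IN  NS  ns1.your-host.example.
a.com.      IN  NS  ns2.your-host.example.

Option 2 - keep DNS at the registrar and point web traffic at your host's server:

a.com.      IN  A      203.0.113.10
www.a.com.  IN  CNAME  a.com.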


Or do I have to transfer it and if so how long does that take?


No, you do not need to do this. The only convenience is having your billing in the same place, but that would seem to defeat the point of having registered it at a cheaper price at an alternative registrar to begin with. For some TLDs (notably .com) you will need to pay for an additional year's registration at the time of transfer.

Transfer can take anywhere from minutes to several days. Delays can be compounded if you are also changing NAMESERVERS.


 query : Re: Could a previous owner's email spam prevent a new great site on that domain from ranking now? As far as I can tell there is no real connection between email spam and SEO, but I have a hard

@Holmes151

I can't answer your question from an SEO angle, but I have seen new domain owners impacted by the actions of previous owners, including having their new domains included on blocklists, marked as fraudulent, added to phishing lists, etc. Though in the instances I've seen, these have been domains that were dropped and then re-registered within a few months.


 query : Re: Full domain .htaccess 301 keeping same paths, but also redirect some URLs to different paths I believe the following code will redirect my old domain to my new domain keeping the same URL paths.

@Holmes151

You simply need to redirect these specific URLs before your catch-all redirect above. For example:

RewriteRule ^old-url-does-not-exist$ /new-url-that-does-exist [R=301,L]


Just to note... in .htaccess there is no slash prefix on the source URL pattern, ie. the regex ^old-url-does-not-exist$ matches the URL /old-url-does-not-exist. Whereas there is a slash prefix on the destination URL. That's just RewriteRule syntax.
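
Putting it together, the ordering in your .htaccess might look something like this (the second rule is just a placeholder for whatever catch-all redirect you already have):

# Specific one-off redirects first
RewriteRule ^old-url-does-not-exist$ /new-url-that-does-exist [R=301,L]

# Existing catch-all domain redirect last (placeholder for your actual rule)
RewriteRule (.*) https://newdomain.example/$1 [R=301,L]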


 query : Re: Google doesn't recognize a 404 status code There are 404 pages with two kinds of response headers (copypasted in full length from Chrome DevTools, Network tab): Response headers: cache-control:max-age=0,

@Holmes151

The HTTP response status is indicated by the very first line of the response (the "Status Line") - which you aren't currently showing in the output in your question. For a 404 response, you would expect to see something like:

HTTP/1.1 404 Not Found



status:404



The Status response header is non-standard.
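
If you want to double-check the status line yourself, one quick way (assuming you have curl available; the URL is a placeholder) is:

curl -I https://example.com/missing-page

Note that -I sends a HEAD request; the very first line of the output is the status line described above.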


 query : How to add the Facebook Messenger Customer Chat Plugin to your site using Google Tag Manager I'm trying to install the Facebook Messenger Customer Chat Plugin onto my website using Google Tag

@Holmes151

Posted in: #Facebook #GoogleTagManager #Gtm

I'm trying to install the Facebook Messenger Customer Chat Plugin onto my website using Google Tag Manager so that it can be controlled by non-developers. However, after creating a 'Custom HTML' block in GTM, it is rejecting the tag, saying that the HTML is invalid.

This code works fine when added manually, but not when used inside GTM:

<!-- FACEBOOK MESSENGER CHAT BOX -->
<div class="fb-customerchat" page_id="xxxxxxxxxx"></div>
<script>
  // Initialise Facebook SDK
  window.fbAsyncInit = function() {
    FB.init({
      appId            : 'xxxxxxxxxxx', // Facebook App ID goes here
      autoLogAppEvents : true,
      xfbml            : true,
      version          : 'v2.11'
    });
  };

  (function(d, s, id){
    var js, fjs = d.getElementsByTagName(s)[0];
    if (d.getElementById(id)) {return;}
    js = d.createElement(s); js.id = id;
    js.src = "https://connect.facebook.net/en_US/sdk.js";
    fjs.parentNode.insertBefore(js, fjs);
  }(document, 'script', 'facebook-jssdk'));
</script>


And the error:


Invalid HTML, CSS, or JavaScript found in template.


What's the workaround for this issue?


 query : Re: How to find out the referrer of Googlebot's crawling URL? Googlebot crawls 100s of 404 URLs from my website. I want to know from where it gets those links? Is there anything like HTTP Referrer?

@Holmes151

Googlebot doesn't send an HTTP Referer request header when it crawls your site.

However, the 404 report within Google Search Console should tell you where these URLs are linked from:


Crawl > Crawl Errors > Desktop | Smartphone
Select the "Not found" sub tab.
The URLs that generate 404s should be listed at the bottom of the page.
Click on any one of these URLs to get a popup of more details of the error.
The "Linked from" tab should contain all the URLs that link to the 404 URL.


(Don't "Mark as fixed", since these are genuine 404s that can't be "fixed".)


 query : Re: How to make a navigation search engine friendly? I have part of navigation in an HTML page that is routed by a select: <select id="archive" name="archive"> <option value="">Select

@Holmes151

I wouldn't be surprised if Google is already able to cope with this navigation "pattern", as it is reasonably common and Google is "good" at processing JavaScript these days.

However, this could be made more accessible (for both users and search engine bots) if you ensure that these form elements are indeed contained within a form, with a specific action and submit button (if you aren't doing this already). All these additional form elements can be overridden/hidden with JavaScript at the time your page is initialised.
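
For example, a minimal sketch of that kind of markup (the action URL and option values are placeholders):

<form action="/archive" method="get">
  <select id="archive" name="archive">
    <option value="">Select</option>
    <option value="01">001</option>
    <option value="02">002</option>
  </select>
  <button type="submit">Go</button>
</form>

Your JavaScript can then hide the submit button and navigate on the select's change event, while bots and non-JS users still get a working form.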


There isn't any dynamic sitemap on the website.


Although this is certainly one instance where a sitemap (XML and/or HTML) would be beneficial.

Making sure these URLs are linked to from some other place would also be advisable.


<option value="1"><a href="http://mylink.com/01">001</a></option>



This doesn't look like a good idea. It isn't valid HTML (an <option> element may only contain text, not an <a> element), so browser behaviour could be erratic, causing you more problems.


 query : Re: How to Delete URL Parameter from Web master tools via .htaccess file? My WordPress site, not that much huge content. Recent faced problem of High CPU bandwidth usage. Within seconds it gets

@Holmes151

It looks like you should probably be blocking these URLs (with URL parameters) in your robots.txt file, to prevent search engine bots (ie. Googlebot) from crawling these URLs in the first place. For example, to block all URLs with query strings:

User-agent: *
Disallow: /*?


Within Google Search Console (formerly Webmaster Tools) you can also explicitly tell Google how to handle each URL parameter, under Crawl > URL Parameters. For example, your filter_display parameter might be defined as:


Does this parameter change page content seen by the user?
"Yes: Changes, reorders or narrows page content"
How does this parameter affect page content?
"Narrows"
Which URLs with this parameter should Googlebot crawl?
"No URLs" (or perhaps "Let Googlebot decide" if you trust Google, given the previous options)



How can I do that through the .htaccess file?


You mentioned in comments that these URL parameters are "not important". However, they do look like they provide some user features (eg. filtering, sorting, ...)? In which case, you probably don't want to use .htaccess. Using .htaccess you could canonicalise the URL and redirect URLs with these URL parameters. This would completely remove these URL parameters from your site - which could even break your site functionality?
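
For completeness, that kind of canonicalising redirect might look something like this (a sketch only - and, again, probably not what you want if the parameters drive real functionality):

# Redirect any URL with a query string back to the same URL without it
RewriteCond %{QUERY_STRING} .
RewriteRule ^ %{REQUEST_URI}? [R=301,L]

The trailing ? on the substitution is what strips the query string from the redirected URL.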



UPDATE: Your robots.txt file (copied from comments):


User-agent: *
Disallow: /*?

User-agent: *
Disallow: /

User-agent: Googlebot
Disallow:

User-agent: *
Allow: /wp-content/uploads/
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /junk/
Disallow: /images/
Disallow: /wp-content/
Disallow: /index.php
Disallow: /wp-login.php



This would not work as intended. You have conflicting groups, ie. three groups that all match User-agent: *. Bots only process one block of rules: the block whose User-agent matches them most specifically, and the User-agent: * block matches any bot that didn't match any other block. From these rules, Googlebot will simply crawl everything (unrestricted), including all your URL parameters - if this is causing problems for your server (as you suggest) then this is not what you want. And from these rules I would "guess" that all other bots will match the first User-agent: * block only.

(But, even if you adopted different reasoning and assumed multiple blocks could be processed, this wouldn't make sense...?)

Depending on your requirements, this should be written something like:

User-agent: *
Disallow: /

User-agent: Googlebot
Allow: /wp-content/uploads/
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /junk/
Disallow: /images/
Disallow: /wp-content/
Disallow: /index.php
Disallow: /wp-login.php
Disallow: /*?


I assume that, if this is a WordPress site, then you don't want even Googlebot to crawl everywhere?

From these rules, all other (good) bots are prevented from crawling your site.


 query : Re: New URLs not being indexed After renaming my files, I can't see the new one indexed by Google when I do site://www.example.com. I am still getting the old files only, and these old files are

@Holmes151

...these old files are giving 404 as the file has been renamed.


You should implement (permanent) 301 redirects from the old URL to the new URL. Only by implementing a permanent redirect will the search engines (ie. Google) know that the old URL has changed to the new URL. This is essential in order to preserve any SEO on the old URL structure that you might have had.
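
For example, with Apache (the file names here are placeholders for your actual old and new URLs):

Redirect 301 /old-name.html /new-name.html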

Google doesn't necessarily treat a 404 as a permanent error - it gives you a chance, since the 404 might be the result of a temporary (erroneous) change on your site - but eventually these old URLs will drop out of the index. Without a redirect from the old URL to the new, you are essentially starting from the very beginning in terms of SEO, which can take weeks, or longer.

Even with redirects in place it can take time for the new URLs to be indexed, depending on how frequently Googlebot visits your site.



UPDATE: As mentioned in comments, if you simply want those old URLs to drop out of the search index quicker (and are not concerned with preserving SEO) then you can issue a 410 Gone HTTP response status instead of the usual 404 Not Found for these old URLs.


I can use header("HTTP/1.0 410 Gone"); in a file which we don't want, but in my case the files exist, I have only renamed them. Do you have any other solution to use 410 in my case?


If you have a custom 404 error document then you can do the same thing from your 404 error document - essentially turning all "404 Not Found" errors into a "410 Gone". For example, on Apache:

In .htaccess or your server config:

# Define your custom 404 error document
ErrorDocument 404 /errordocs/e404.php


Then, inside /errordocs/e404.php:

<?php
// 404 error document, but override with a 410
header("HTTP/1.0 410 Gone");
?>
<html>
<head><title>410 Gone</title></head>
<body>
<h1>410 Gone</h1>
<p>This page has gone for good.... <a href="/">Home</a></p>
:


However, unless you identify all the "old" URLs, this approach (or any approach) will result in a 410 Gone being served for all non-existent URLs - whether they existed previously or not (or may exist in the future). Note that, as suggested above, search engines are less forgiving of a 410 if you make a mistake.


 query : Re: Do URL parameters create different URLs from Google's SEO perspective? I have the following 4 URLs. Are all these the same URL or different URLs from Google's SEO perspective? (Having a different

@Holmes151

Yes, those are all different URLs and are therefore different from Google's SEO perspective.

However, if these URLs return the same content then you need to canonicalise the URL by setting a <link rel="canonical" ... element in the head section of the relevant HTML document. This is to ensure that only the canonical URL is returned in the search results.
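
For example (with a placeholder URL):

<link rel="canonical" href="https://www.example.com/page">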

Within Google Search Console (GSC), you can also tell Google which URL parameters to ignore (if you are unable to set the canonical URL). However, this only informs Google, you may still want to consider other search engines.


 query : Re: What is the best practice for redirecting the IP address of a web server when it hosts multiple domain names? I plan to have 2 separate domain names pointing to the same Apache installation

@Holmes151

The answer to your question "What is the best practice for redirecting the IP address of a web server when it hosts multiple domain names?" is this: Best practice is to use a redirect (Apache) or return an error 444 (Nginx) to prevent host header attacks.

Your second question about SEO is irrelevant and has nothing to do with the title of your question, but you should ask a new question "Can redirecting requests for a server's IP address influence SEO?" if it worries you.

Here is an example for Apache:

<VirtualHost *:80>
    ServerName IP.AD.DR.ESS
    Redirect permanent / http://www.example.com/
</VirtualHost>


And here is an example for Nginx:

server {
    listen 80;
    server_name IP.AD.DR.ESS;
    return 444;
}


Both examples listen on port 80 only, because redirecting HTTPS requests for the bare IP address on port 443 is not practical: the TLS handshake (which would need a certificate issued for the IP address itself) happens before any redirect can be sent.


 query : Changing the Name Servers is better. While it isn't common, the IP address for a server can change, if they do and you've set an A record, your site will go down if you don't update.

@Holmes151

Changing the Name Servers is better.

While it isn't common, the IP address of a server can change; if it does and you've set an A record, your site will go down until you update the record. On the other hand, if you set the Name Servers and the IP is "embedded" in the Name Servers themselves, you won't have that same downtime if the IP changes.


 query : Re: Changing nameservers from Weebly to another provider for email, will it affect my site? I have a Weebly site and I want a free custom domain email. There is an option of G Suite that's like

@Holmes151

Your site will go down if you change the Name Servers.

Ask Migadu if they have any MX records that you can apply to the domain instead. That is really all that is needed to transfer the email hosting to another provider. If they don't, you may want to seek out another provider.


 query : Re: Can a domain be obtained and used without a domain registrar? I guess the greater question should be Why is a domain registrar necessary, but does one for ease of the average man. I understand

@Holmes151

OK, let's talk through how your web browser turns example.com into an IP address, assuming that it doesn't already have anything cached, which it undoubtedly will if you've ever done any DNS lookups before. Technically, it's your operating system doing this, not the browser -- the browser just asks the operating system and gets the response. And in fact you almost certainly have a caching DNS server somewhere downstream of you, so your computer will never actually need to do this itself, even the first time you use it.

First, it checks to see if it already knows the IP address for example.com. Nope. How about the DNS servers that are authoritative for example.com? Still no. How about just com? Still no. Now it would be stuck, except that your computer contains a hard-coded list of the IP addresses of the root DNS servers, which change only very infrequently. So it asks a root server to tell it about com, and the root server tells it what DNS servers are authoritative for com. Then it asks an authoritative server for com about example.com, and the com server tells it what DNS servers are authoritative for example.com. Then it asks an authoritative server for example.com for an A record for example.com, and gets the answer it needed.
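
You can watch this delegation chain yourself, assuming you have the dig tool installed:

dig +trace example.com A

The output shows the referrals from the root servers, to the com servers, to the servers that are authoritative for example.com.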

The role of the domain registrar is to get your DNS servers listed on the authoritative server for com, as the authoritative servers for example.com, so that people's computers can find them. That's why you need a registrar.


 query : Re: What are the disadvantages of using POP3, SMTP with Google instead of a service like G Suite or Zoho Mail? G Suite is not free and Zoho Mail's free plan has some limitations. But I can use

@Holmes151

They’re not comparable services. G-Suite and Zoho Mail host email for your own domain. Gmail just gives you a single gmail.com address. If you just want one email address in gmail.com then Gmail is fine, but if you want to use your own domain then you need something more, and there are very few free options.


 query : Re: Does Googlebot add any parameters to a URL when crawling? I have set up a memory caching system that will use the URL as a key to cache a copy of a HTML page in memory in order to make

@Holmes151

Googlebot does not add any additional URL parameters of its own when it crawls your site.

The "complete" URLs that Googlebot crawls (which may or may not include URL parameters) are URLs that have been discovered, either on your site or on external sites that link to you.

If you find that Googlebot is crawling with unexpected URLs / URL parameters, it may indicate a misconfiguration on your own site or some other site(s) are targeting you and maliciously linking to keyword-rich URLs (if your site is susceptible) in order to control your SEO.

A related question, although not necessarily relevant to you unless you are using tracking parameters. These are perhaps URL parameters that could be excluded from your caching algorithm:


Is there a set of well-known tracking parameters besides utm_*?
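
As a rough illustration of that last idea, the cache key could be normalised by stripping known tracking parameters before lookup (a sketch only - the parameter list is an assumption, and PHP is used purely for illustration):

<?php
// Sketch: build a cache key by stripping a few common tracking parameters.
// The list of parameters here is an assumption, not exhaustive.
function cache_key($url) {
    $parts = parse_url($url);
    parse_str(isset($parts['query']) ? $parts['query'] : '', $params);
    foreach (array_keys($params) as $name) {
        if (strpos($name, 'utm_') === 0 || in_array($name, array('fbclid', 'gclid'))) {
            unset($params[$name]);
        }
    }
    $query = http_build_query($params);
    return $parts['path'] . ($query !== '' ? '?' . $query : '');
}

echo cache_key('/page?id=3&utm_source=news'); // prints: /page?id=3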


 query : Re: Is there any regulation on the yearly renewal price of a domain under a premium TLD? I am looking at a .xyz domain that is apparently considered 'premium'. Registrars quote around the 0

@Holmes151

No, there isn't really any regulation about that; registries can charge what they want. Your best bet is to contact the registrar and ask them what the renewal price is. They should be able to tell you:


Whether it has a premium renewal price or is regular priced
The price, if it is premium


 query : Re: How spammers find my email address even I registered domain with privacy protection I recently ( just before around 3-4 days) bought a domain name from namecheap with privacy protection. Assume

@Holmes151

Was the email sent to your email address or was it sent to generic@domain.com, which was then forwarded to your email address?

Many spammers will just hammer your domain with emails at any generic string they can think of, until they get hits.


 query : Re: Google Analytics not working - Non-standard implementation in Google Tag Assistant I've pasted the following code immediately after the <head> tag on my layout pages (using ASP.NET MVC Razor layouts,

@Holmes151

The filter was: include only traffic to the hostname that are equal to example.com.

Since my site uses www, this didn't work, because the hostname included www in this case.

I should have read this list of custom filter fields first, which defines exactly what is meant by hostname (a URL hostname): support.google.com/analytics/answer/1034380?hl=en
Also see here: www.gilliganondata.com/index.php/2012/05/22/the-anatomy-of-a-url-protocol-hostname-path-and-parameters/
I changed the filter to: include only traffic to the hostname that contain example.com.

Now it works, I can see traffic.

Although Google Tag Assistant is still giving me the non-standard implementation warning. Others have said it's buggy; I don't know, I will try to fix it later.

Update: Putting the Analytics code snippet at the end of the <head> tag instead of at the start makes no difference. Still getting Non-standard implementation.


 query : Google Analytics not working - Non-standard implementation in Google Tag Assistant I've pasted the following code immediately after the <head> tag on my layout pages (using ASP.NET MVC Razor layouts,

@Holmes151

Posted in: #GoogleAnalytics

I've pasted the following code immediately after the <head> tag on my layout pages (using ASP.NET MVC Razor layouts, but that wouldn't have an effect, would it?):

<!-- Global Site Tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-111111111-1"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());

  gtag('config', 'UA-111111111-1');
</script>


This is exactly how it appears in the Google Analytics portal, except obviously that's not the real ID.

This code is the very first thing after the head tag, before even the meta tags, although I just read that that's probably not a good idea: stackoverflow.com/questions/1987065/what-are-best-practices-to-order-elements-in-head
Google Tag Assistant says I have a non-standard implementation, and the portal has sent me an alert stating that I'm not getting any hits, even though I know I am, and I've used the portal to send test traffic also.

I've got two IP filters on that exclude traffic, which shouldn't matter, and a filter to only look at traffic to the live domain (I've got a duplicate site on another domain for testing that uses the same code base).

The only other thing I can think of is because this site is hosted on Azure, even though I've configured a custom domain, maybe I'm getting no traffic because Google Analytics is seeing the mywebsite.azurewebsites.net domain rather than the custom domain, and the filter is for the custom domain? But that would be a bit odd.

The site has had the analytics code in it for about three days now.

What am I doing wrong?


 query : Does google crawl and rank content in an amp-list? I'm building an AMP page. I want to use amp-list and dynamically fetch content for the list from a json endpoint. So if I have a series

@Holmes151

Posted in: #Amp #Google #Seo

I'm building an AMP page. I want to use amp-list and dynamically fetch content for the list from a JSON endpoint. So if I have a series of products on my home page in an amp-list, will Google list the page when users search for those products?
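
For context, the markup I have in mind is roughly this (the endpoint URL and field names are placeholders):

<script async custom-element="amp-list" src="https://cdn.ampproject.org/v0/amp-list-0.1.js"></script>
<script async custom-template="amp-mustache" src="https://cdn.ampproject.org/v0/amp-mustache-0.2.js"></script>

<amp-list width="auto" height="300" layout="fixed-height"
    src="https://example.com/products.json">
  <template type="amp-mustache">
    <div>{{name}} - {{price}}</div>
  </template>
</amp-list>

where products.json returns an object of the form {"items": [{"name": "...", "price": "..."}, ...]}.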


 query : Have two paths with a varying URL in Google Analytics goal funnel How do I track which application form generated goals based on a different URL in one step of the goal funnel? Lets say

@Holmes151

Posted in: #Analytics #Goals #Google

How do I track which application form generated goals based on a different URL in one step of the goal funnel?

Let's say that there are two ways to fill in the form and make the conversion:


/register
/msignature
/proceed
/thankyou


and another:


/register
/upload
/proceed
/thankyou


The second step is different in the two cases. I want to see not only the conversions, but also how many people chose one way or the other to make a conversion. That way I can measure the effectiveness of each path a user can choose.


 query : How create GA report for list of URLs Noob question here... I would like to create a report or be able to view pageviews and unique pageviews for a list of about 10-15 URLs on my web site.

@Holmes151

Posted in: #GoogleAnalytics

Noob question here... I would like to create a report or be able to view pageviews and unique pageviews for a list of about 10-15 URLs on my web site. I would like to be able to see the data broken out for each URL. The pages do not share a common URL structure. Here's an example of a few:

example.com/ppc-ase-1/
example.com/cotf-thank-you/
www.example.com/rt-sis-1/


Can anyone point me to how to do this in GA?

Thanks for any help you can offer...


 query : What are these many requests referred from Facebook with a changing s= parameter? I just recognised a peak in our access logs and I'm curious how to explain this or if my suggestion how to

@Holmes151

Posted in: #Facebook #WebCrawlers

I just noticed a spike in our access logs and I'm curious how to explain it, or whether my suggested explanation is right.

There were about 600 requests in 3 minutes with a referrer like this:
l.facebook.com/l.php?u=http%3A%2F%2Fwebsite.tld%2F%3Fs%3D23842660217050537&h=ATPGZAKFqpZ7hfY6SgKTDAE3WGgI7-kgN1nq8GtniXf-mxzE8aeMvjTbcze7mNvbjYZrOI5FDHiZa-VwQiW3_jN_sndIh71MOAg5yYrOFoo
The number in the s= parameter within the u= parameter, in this case 23842660217050537, changes with every request. (The s= parameter has no function on the website.)

My guess is that somebody is posting many URLs with the changing s parameter in a Facebook chat or post window to trigger these requests - maybe to crawl something, or maybe to DDoS the website. BUT the requests on our site even execute the JavaScript and get tracked by Google Analytics; in addition, the user agent and the request language change with every request. The IP range those requests were made from consists of Facebook-owned addresses.

Isn't Facebook doing any rate limiting for those requests?

Is this some kind of extremely ineffective reflection attack, or more likely an attempt to crawl our website by requesting the same page over and over?


 query : Re: SEO impact of hosting static HTML on Amazon S3 vs. webserver Scenario: Updating a static HTML file hosted on AWS S3 bucket. I read that a file (including static HTML) cannot be updated on

@Holmes151

You may be misunderstanding the concept, here.

Files on S3 cannot be modified, but they can be overwritten, and overwriting a file does not require deleting the old file, first. You simply upload a new file with the same name.

The old file does not go away unless and until the new upload is complete and successful. A failed or partial overwrite of an existing object in S3 will never corrupt the existing object. If an overwrite fails due to any cause, such as losing your Internet connection during the upload, S3 discards the failed upload and the original object remains untouched.

Any downloads of the old file are allowed to finish, and downloads of even large files are not interrupted or corrupted if the file is overwritten with a download in progress.
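
For example, with the AWS CLI (the bucket and file names are placeholders):

aws s3 cp index.html s3://my-bucket/index.html

This replaces the object at that key in a single step; readers see either the old version or the new one, never a partial mix.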

There's no potential for SEO impact here.


 query : Re: Google shows my store/website on the right but not on search results, is this ok? I have this local shop on Google Places/Maps and there's a website and everything. When people search for the

@Holmes151

Is this how it works? Is this expected?


Yes and yes. Congratulations, Google has recognised your business listing.


Or should it appear on search results too


It arguably already does appear in the search results. Displaying all that information in amongst the actual search results (on the left) would be cumbersome.

Or I assume you mean that you are expecting to see at least a link to your website in the SERPs? Yes, you would indeed expect this. However, since your site is "just a single page with a logo and a few contact options", I expect Google does not yet think this is index worthy. Add some meaningful content etc. and you can expect to see your site in the SERPs in time.


 query : Re: Are there other options besides HTTPS for securing a website to avoid text input warnings in Chrome? One week ago Google sent me an email to go HTTPS. If I won't transfer HTTP to HTTPS then

@Holmes151

Is there any other way to make my connection secured?


Google isn't just complaining about "security" (which could include a number of different topics), it is specifically targeting encryption / HTTPS. With plain HTTP the connection between the client and server is unencrypted, allowing anyone to potentially see and intercept anything that is submitted. It would normally only prompt with this if you are allowing users to login (ie. submitting username/password) or submitting payment information over an unencrypted connection. General "text" form submissions would not necessarily be a problem. However, as @Kevin pointed out in comments, Google/Chrome plan to extend this in the future:


Eventually, we plan to label all HTTP pages as non-secure, and change the HTTP security indicator to the red triangle that we use for broken HTTPS.


Installing an SSL cert on your site (or using a front-end proxy like Cloudflare to handle the SSL) is the only way to encrypt the traffic to your site.

However, this isn't necessarily a "costly process" these days. Cloudflare have a "free" option and Let's Encrypt is a free Certificate Authority that many hosts support by default.
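
For example, if you have shell access to an Apache server, a Let's Encrypt certificate is typically obtained with a client such as Certbot (a sketch only; the domain names are placeholders):

sudo certbot --apache -d example.com -d www.example.com

Many shared hosts expose the same thing through their control panel instead.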

