@Shakeerah822

Shakeerah822

Last seen: Tue 11 May, 2021

Recent posts

query : Invalid URLs with 5 character prefixes

@Shakeerah822

Posted in: #Hacking #Url

I've been getting lots of attempts to access pages on my site with 5 upper/lowercase chars as the first path. For example:

/aOKTW/validpage


That first 5 character path is always different. I suspect this is some type of hacking attempt, but can't figure out what. Anyone seen this?

At the moment I'm generating 404s but would like to be a bit broader in blocking this traffic and subsequent URL requests.
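For context, the sort of blanket rule I have in mind is something like this (just a sketch, assuming Apache with mod_rewrite and that no legitimate URL on my site starts with a bare five-letter segment):

# Sketch only: refuse any request whose first path segment is exactly five
# letters, e.g. /aOKTW/validpage (assumes Apache with mod_rewrite enabled)
RewriteEngine On
RewriteRule ^[A-Za-z]{5}/ - [F,L]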


query : Why does Google index two copies of a website, one on a development URL with the development site ranking better?

@Shakeerah822

Posted in: #Google #Seo #SiteDeployment

I only found one question similar to mine, but it is for Yahoo search. Mine is regarding Google. My question is also a little extensive.

Background: I created a website for a client. In order to test it and check the client was happy before publishing, I would upload the website (from VS) to my Azure hosting account via Publish.
When it was complete, I would publish to mine, then log into my client's account and publish the exact same website. They were identical to the dot; every white space was identical.

My client has his domain from GoDaddy and paid for the extra service of search engine visibility. I had submitted his website to Google and told him to wait a few days before finding it.

To my surprise, I found my website on Google because it came up with the same description. I'm not confused as to why it was on Google, but I am confused as to why my website was significantly higher in the rankings. I think mine was on page 2 and his was on page 4 or 5.

Why and how did this happen?
Some information:


It's one project on VS that was published to two different hosting accounts, so it can't be anything to do with settings or hidden files. Also the same keywords, text, alt tags etc., so the ranking should be the same when it comes to those kinds of things.
Neither of us has an SSL certificate.
Both websites are hosted with the same company (Azure) on different accounts.
Both our domains (on separate accounts) are on GoDaddy.
He pays extra for SE visibility; I don't.
His domain name includes the key words searched. Mine is my own portfolio, so there are no key words relevant to his company within the domain name.


Question: Why is my website significantly higher in ranking?

In addition: I have since updated some text in the website description, as I noticed on his, Google was showing the description with a typo. I fixed this. I have also updated my own website to something completely new. It's a new project so even things like meta data would not be left behind.

However, Google was STILL showing the typo in the description to his website (so basically Google hadn't been updated) and my website still ranked higher than his.

I went to HIS GoDaddy and resubmitted the website. The typo has since disappeared. However, my website is still showing and ranking higher than his, although I must say he is now only one place behind me (i.e. if I'm number 12, he is number 13, on the same page). I can't do a resubmit because I never did a submit in the first place. So how did Google find my website originally, and when will it update? It's been about 3 weeks.

To clarify: my website is now a "Coming soon" page with no content, and I published this on a new hosting account (just moved over the domain). So I cannot do any redirect, 303, or anything else, because that copy of his website on my account no longer exists.


query : What are standard pixel widths a webpage would use for device detection?

@Shakeerah822

Posted in: #Css #MediaQueries #Resolution #WebDevelopment #WebsiteDesign

What specific pixel width (or height) values should I use to distinguish between a mobile phone, a tablet, and a desktop using Media Query?

Are there more professional methods than just pixel width detection?
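For context, what I have in mind is something along these lines; the pixel values here are just common conventions I've seen, not an official standard:

/* Default (mobile-first) styles apply to phones */

/* Tablets and up - 768px is a convention, not a standard */
@media (min-width: 768px) {
    /* tablet styles */
}

/* Desktops and up - again, just a commonly used value */
@media (min-width: 1024px) {
    /* desktop styles */
}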


 query : Re: How to fix two PHP errors I cannot get the plugin author to answer I am the administrator of my website and when I turn debug on to track down a problem on the site, the debug.log is cluttered

@Shakeerah822

Since your "code works OK" and these are just E_NOTICE messages (as opposed to warnings or errors) then you should be able to modify your code like the following in order to workaround these messages:


Trying to get property of non-object...


We need to check that the $post variable is of the expected type before attempting to process it.

// Replaces the_author_link() output with your custom entry or return the logged in user if there is no custom entry
function custom_author_uri( $author_uri ) {
    //global $authordata;
    global $post, $authordata;
    if ( is_object( $post ) && property_exists( $post, 'ID' ) ) {
        $custom_author_uri = get_post_meta( $post->ID, 'uri', TRUE );
        if ( $custom_author_uri ) {
            return $custom_author_uri;
        }
    }
    return $author_uri;
}
add_filter( 'author_link', 'custom_author_uri' );



Undefined index: author_noncename


function cab_save_postdata( $post_id ) {
    global $post, $cab_new_meta_boxes;

    foreach ( $cab_new_meta_boxes as $meta_box ) {
        // Bail out early (and avoid the "Undefined index" notice) when the nonce field is absent
        if ( empty( $_POST[ $meta_box['name'] . '_noncename' ] ) ) {
            return $post_id;
        }
        if ( !wp_verify_nonce( $_POST[ $meta_box['name'] . '_noncename' ], plugin_basename( __FILE__ ) ) ) {
            return $post_id;
        }

        if ( 'page' == $_POST['post_type'] ) {
            if ( !current_user_can( 'edit_page', $post_id ) )
                return $post_id;
        } else {
            if ( !current_user_can( 'edit_post', $post_id ) )
                return $post_id;
        }

        $data = $_POST[ $meta_box['name'] ];

        if ( get_post_meta( $post_id, $meta_box['name'] ) == "" )
            add_post_meta( $post_id, $meta_box['name'], $data, true );
        elseif ( $data != get_post_meta( $post_id, $meta_box['name'], true ) )
            update_post_meta( $post_id, $meta_box['name'], $data );
        elseif ( $data == "" )
            delete_post_meta( $post_id, $meta_box['name'], get_post_meta( $post_id, $meta_box['name'], true ) );
    }
}
add_action( 'admin_menu', 'cab_create_meta_box' );
add_action( 'save_post', 'cab_save_postdata' );


Maybe you could fail sooner in this second function - but without knowing the code, that is difficult to say. You may need to add further checks if you are still getting "Undefined index" messages.

(Although the nagging thought in the back of my mind is why these functions are being called at all in such circumstances?)

These changes shouldn't make the code run any differently (since you said it "works OK") - they simply avoid the nagging E_NOTICE message(s), assuming that these conditions are normal and expected.

As suggested in your other question, why you are getting these E_NOTICE messages now may simply be a difference in the default error_reporting level after updating to PHP 7. (?)


query : How can I stop this PHP Notice or fix the problem that clutters up the debug.log after switching to PHP 7?

@Shakeerah822

Posted in: #Php #Wordpress

EDIT: I am the administrator of several websites and when I turn debug on to track down a problem on one of the sites, the debug.log is cluttered with hundreds of lines of PHP notices each day about a PHP problem in a plugin. The repeated notices obscure the debug information I am looking for to fix an important problem. I tried contacting the author of the plugin through the plugin's support forum to get a fix so I can stop the buildup of the log but there are no responses to questions in the forum for the plugin.

What do I need to do to suppress or fix this undefined index error in the WordPress plugin so it stops adding hundreds of PHP notices in the debug.log when I have debug turned on?

The plugin has a function to check if the browser is mobile. Since switching to PHP 7, I started getting the following PHP Notice:


Undefined index: HTTP_ACCEPT in /plugins/dynamic-to-top/inc/dynamic-to-top-class.php on line 440


This notice was not generated with PHP 5.6, so I thought something had changed in PHP 7 for this line to generate that notice. The answer below says it is not a change in PHP 7 that generated the notice, but a more thorough reporting method. The following is the line that is called out in the notice.

if( preg_match( "/wap.|.wap/i", $_SERVER["HTTP_ACCEPT"] ) )
return true;


I checked the PHP Manual and HTTP_ACCEPT is a correct element for $_SERVER.
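For what it's worth, I assume wrapping that line in an isset() check (a sketch, not the plugin author's code) would silence the notice, since the header can simply be absent from a request:

// Sketch: only run the match if the header was actually sent
if ( isset( $_SERVER["HTTP_ACCEPT"] ) && preg_match( "/wap.|.wap/i", $_SERVER["HTTP_ACCEPT"] ) )
    return true;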

The full function is

function is_mobile() {

    if( isset( $_SERVER["HTTP_X_WAP_PROFILE"] ) )
        return true;

    if( preg_match( "/wap.|.wap/i", $_SERVER["HTTP_ACCEPT"] ) )
        return true;

    if( isset( $_SERVER["HTTP_USER_AGENT"] ) ) {
        $user_agents = array(
            "midp", "j2me", "iphone", "avantg", "docomo", "novarra", "palmos",
            "palmsource", "240x320", "opwv", "chtml", "pda", "windows ce", "mmp/",
            "blackberry", "mib/", "symbian", "wireless", "nokia", "hand", "mobi",
            "phone", "cdm", "up.b", "audio", "SIE-", "SEC-", "samsung", "HTC",
            "mot-", "mitsu", "sagem", "sony", "alcatel", "lg", "erics", "vx", "NEC",
            "philips", "mmm", "xx", "panasonic", "sharp", "wap", "sch", "rover",
            "pocket", "benq", "java", "pt", "pg", "vox", "amoi", "bird", "compal",
            "kg", "voda", "sany", "kdd", "dbt", "sendo", "sgh", "gradi", "jb", "dddi", "moto" );

        foreach( $user_agents as $user_string ) {
            if( preg_match( "/" . $user_string . "/i", $_SERVER["HTTP_USER_AGENT"] ) )
                return true;
        }
    }

    do_action( 'mv_dynamic_to_top_check_mobile' );

    return false;
}


Why isn't it a defined index?


 query : Re: How to allow google to index my deep pages? Let's say I'm building an open wiki system. Each user can start a new page, give it a name like "how-to-travel-faster-than-light" which will turn

@Shakeerah822

How would google ever find and index the "how-to-travel-faster-than-light" page if it can't find it, ...


If it can't find it, it can't index it. It's that simple. However, Google can potentially discover URLs in many ways, from scanning emails in Gmail to URLs typed into its Chrome web browser (although these methods are naturally unreliable if you are trying to get a URL indexed).


How did wikipedia or stack exchange solve this problem?


Well, Wikipedia does have numerous Contents and hierarchy Indices, as well as a complete alphabetical index - so it would certainly seem spiderable. It is also very well cross-linked and inbound links are second to none. It might even have a sitemap (although I can't find it), as it's still well within the 2.5 billion sitemap URL limit (2017 figures) as set by Google.

The Stack Exchange Questions page - which lists new questions first (and active questions rise to the top) - is naturally a spiderable index of all pages/questions on the site. There is also an RSS question feed. And, until recently, there was an XML sitemap. (The sitemap URL stated in the Webmasters robots.txt seems to result in a 404 currently?)


There is no site map listing every link to every user page.


Why not? This is a standard way of informing search engines about hard to reach pages.

When pages are created you can ping Google and other search engines to notify them of an update. They will then request your updated (auto-generated) sitemap.
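As a rough sketch, an auto-generated sitemap entry for a newly created wiki page might look like this (the URL and date are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/wiki/how-to-travel-faster-than-light</loc>
    <lastmod>2017-07-01</lastmod>
  </url>
</urlset>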

Just having Google Analytics installed on your site will at least allow Google to discover URLs the very first time you (or the author) visits the page.


 query : Re: isAccessibleForFree and cloaking I need an advice regarding isAccessibleForFree and cloaking. I'm running a subscription based news website which has standard free to view teaser (short description)

@Shakeerah822

Does this mean that no matter if the user is logged in or not the server should send the full article which will hide content, if user is not logged in just via CSS?


No, at least that is not what I take from the linked document. No CSS "hiding" is involved.

You only deliver the "full (paywalled) content" to authenticated/subscribed users and verified Googlebots (if you want Googlebot to index the paywalled content). (Note that verifying the Googlebot is more than simply checking the User-Agent. You are also validating the IP address using reverse/forward DNS lookups - which should then be cached for a period.)

It is the schema.org JSON-LD markup in your content that enables Googlebot to differentiate this from cloaking.
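As a sketch (following Google's paywalled-content guidance; the headline and CSS selector are placeholders), the article page would carry markup along these lines:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "headline": "Example teaser headline",
  "isAccessibleForFree": "False",
  "hasPart": {
    "@type": "WebPageElement",
    "isAccessibleForFree": "False",
    "cssSelector": ".paywalled-section"
  }
}
</script>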


 query : Re: Countering a DMCA removal from Google Search for a root category page A category page got a Notice of DMCA removal from Google Search, submitted through Lumen Databases, for a copyright infringing

@Shakeerah822

First some background information, as you are noticing a pattern.

Generally speaking a "Notice of DMCA removal from Google Search" is submitted by a 3rd company that claims to be specialized in finding infringing content and submitting requests to remove it. Such notices and removals of links from Google search are not the initiative of Google or based on Google's own bots or spiders.

However, the companies that specialize in collecting URLs of infringing content often simply have bots or spiders running over the internet, querying search engines with broad, generally infringing terms, and noting down every URL that has closely matching keywords (read: content/text) and is not yet whitelisted by their system.

Thereafter, they automatically generate reports in Google's required format, and submit a removal request.

Google thereafter has a (human or automatic - not sure) process to verify the claims, and if in doubt, removes the links to the content without warning.

This in turn also affects your SEO rankings, as a large percentage of removed URLs vs. total URLs on your website could mean you are a copyright-infringing website.

How to counter notice

Google has a fairly simple counter notice system and it works well. Generally speaking, when you submit a counter notice - which can be done by following the link in the e-mail where Google notifies you of a DMCA removal - Google follows up within 48 hours. In this Google form, you explain why you believe/claim the removal was in error. After submission of your counter notice, Google confirms that it received your request. The request is then sent to the company that asked for the removal of your URL. They always reply, as they are obliged to; if they don't reply in time (14 days), Google reinstates your URL, generally within 10 days after that (this is from experience, not from what's claimed on the internet). If the company does reply, they will likely ask you for further information, ask for your legal contact, and so on. In the end, if your URL was requested to be removed in error, the company may whitelist your URL; this is at the sole discretion of the company and depends on the strength of your claim and your ability to prove that you are not likely to host infringing content on your site.

Considerations

Your site has user-generated content. This is something often red-flagged by such companies and hard to get whitelisted. Do you have proven processes/mechanisms that control user-generated content before it appears on the web? This is perhaps one of the most important considerations for companies to allow whitelisting you. Do you have a good track record, as YouTube has? That is another important factor to consider before starting to compare yourself with YouTube.

I hope this answer helped, and good luck reinstating your URLs.


 query : Re: Can the content attribute of be left empty? Is <meta name="robots" content="" /> a legitimate meta, exactly equivalent to: <meta name="robots" content="index, follow" /> <meta name="robots"

@Shakeerah822

option 3. Nothing is by far the best approach.


If you mean that the robots tag should be omitted entirely, then yes, that would be the best option if you want the page to be indexed and followed.

The other options just add superfluous bytes and will be ignored.

As mentioned in my answer to your other related question, Google only includes all (out of what you have posted) in its list of valid directives. But as Google states, "this has no effect" anyway, as this is the default value. Google (and I suspect all search engine bots) simply ignore index, follow - since this is again the default behavior.


<meta name="robots" content="" />



An empty content attribute is valid HTML. However, it contains no directive, so can only be ignored (as stated above).


query : Proper way of allowing Apache to serve from an encrypted folder (/home dir)

@Shakeerah822

Posted in: #Apache #Linux #Server

I have a folder in my home folder, /home/user/mywebsite. My home folder was encrypted when I installed Ubuntu. I have a symlink in /var/www/fleet -> /home/user/mywebsite.

My configuration file:

<Directory /var/www/fleet>
    Options Indexes FollowSymLinks MultiViews
    AllowOverride All
    Order allow,deny
    allow from all
</Directory>



When I try to access it, I get a 403 Forbidden.


apache error.log: [Fri Dec 01 16:08:28.100927 2017] [core:error] [pid 23024] [client 192.168.168.9:50328] AH00037: Symbolic link not allowed or link target not accessible: /var/www/fleet


I know it's a permissions issue since I can run this command:

sudo -u www-data ls /var/www/fleet
ls: cannot access /var/www/fleet: Permission denied
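For reference, a per-component view of the permissions (namei comes with util-linux) would presumably show which directory along the path blocks www-data - just a sketch of the approach:

# Sketch: list owner/group/permissions of every component along the path,
# following the symlink to /home/user/mywebsite
namei -l /var/www/fleet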




What is the correct way to allow Apache to serve from an encrypted folder? Do I add the www-data user to my user group?


 query : Re: How to access a website through a Local Area Network that localhost change to a domain name I have a problem with accessing the website that I created on XAMPP server from another device on

@Shakeerah822

Note that 127.0.0.1 is special. It always refers to the "localhost", ie. the computer/device you are currently making the request from. You need to use the private IP address of the computer on your LAN.
@dan touched on the required method in comments:


You'll need to create an A record in your domain's DNS table and point that to your router's public IP address, then forward port 80 to the private IP (e.g., 192.168.0.2) of your local computer running Apache. You'll also need to disable any firewall rules for port 80 on that computer. Easier than all the above however is just to use the private IP address as the URL in your other devices, but you'll still need to disable the firewall for the computer running Apache.


However, if you only need to be able to access your site from other devices on the same LAN then you can simply set the public A record in the DNS to the private IP address (eg. 192.168.0.2) of your local computer running your web server. Since you are only dealing with devices already on your LAN you probably don't have any firewall settings to update (depending on the size and complexity of your LAN) - only devices on the LAN will be able to access the internal/private IP address.

(If you use the private IP address directly - as dan suggests - then you'll only be able to enable one development site at a time on your server.)

It would be easier to implement this as a subdomain of your main domain, rather than take over the entire domain. This would then enable your local development server to be available (only from your LAN) at the same time as the live public website. E.g. your main public website is available at example.com and your local development server is available at local.example.com (which is simply configured as an A record pointing to the private IP address of your web server on your LAN).

Using name-based virtual hosts on your development server, you would then define the subdomain as the ServerName:

<VirtualHost *:80>
    ServerName local.example.com
    DocumentRoot "C:\xampp\htdocs\Example"
</VirtualHost>


Using this method allows any devices on your LAN to access your site, without having to edit individual HOSTS files (or override DNS) on those devices. Note that on some mobile devices, you'll need to disable the "data saver" option, as this requires the website to be public (the "data saver" servers will simply fail to make the required requests).


 query : Re: [SEO, Breadcrumbs]: Homepage Breadcrumb My question is simple, do I make Breadcrumb for Homepage or not?

@Shakeerah822

It depends on your site structure.

However, on most sites, the homepage is a navigational hub for the rest of the site, so it makes sense to include the homepage at the head of the breadcrumb trail in this case.
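As a sketch (schema.org BreadcrumbList vocabulary; the names and URLs are placeholders), a trail that starts at the homepage could be marked up like this:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Home", "item": "https://example.com/" },
    { "@type": "ListItem", "position": 2, "name": "Category", "item": "https://example.com/category/" },
    { "@type": "ListItem", "position": 3, "name": "Current page", "item": "https://example.com/category/page" }
  ]
}
</script>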


query : Individual Unique Pageviews not matching the funnel step view

@Shakeerah822

Posted in: #GoalTracking #GoogleAnalytics #PageViews

I have set up a goal funnel where the destination page option is Regular Expression.

In my first required step of the funnel, I have 9 pages and so have used a regex to put all of those in the input. But now when I compared the unique pageviews of all those pages (through the All Pages report), I found a difference of about 10% in the numbers (unique pageviews > funnel views).

This is the regular expression -

/content/jsp/investor/MFBuy.do?method=displayAPURDetails|/content/jsp/investor/MFBuy.do?method=displayPURDetails|/content/jsp/plan.do?c=checkFacta|/content/jsp/planOneTimeInvest.do?method=getConfirmation|/content/jsp/Systematic/SIPAction.do?method=displaySIPDetails|/content/jsp/Systematic/FlexiSIPAction.do?method=displaySIPDetails|/content/jsp/Systematic/StepUpSIPAction.do?method=displaySIPDetails|/content/jsp/Systematic/AlertSIPAction.do?method=displaySIPDetails|/content/jsp/Systematic/SIPINSAction.do?method=displaySIPDetails


query : Rich snippets on product category page without AggregateRating

@Shakeerah822

Posted in: #RichSnippets #SchemaOrg #StructuredData

In the past we used rich snippets on product category pages (filtered themed product list)

For this we listed from-to price. Description. Thumbs. Availability. And AggregateRating.

AggregateRating is no longer allowed. Or, stated otherwise, you can do it, but then Google will drop all your snippets.

That being said, here is my question:

What is the alternative for product category pages without AggregateRating (not marking up individual products)?

I mean, I can share the title, product count, lowest and highest price, availability, description, thumbs etc. all in snippets. How would this snippet look, and does anyone have experience with this?

OK, AggregateRating is not allowed, but it seems illogical that there is now no rich data at all, when I read that Google likes snippets because they explain what is on the page.

Thanks!!


query : Goal Funnel Conversion - Avg for view

@Shakeerah822

Posted in: #Conversions #GoalTracking #GoogleAnalytics

In Google Analytics, while visualizing the goal funnel, what is the number shown under the header "Avg for view"?



Is it the goal conversion rate from the first required step of the funnel? If so, why is it different from funnel conversion rate?



Basically, the 829 sessions shown here are all the hits that occurred on the goal destination URL, not only those from the funnel (as per the 1st point of the LunaMetrics article as well).

When I divide this number by the pageviews of my first required step in the funnel, I get this "funnel conversion rate" number (which definitely seems odd).

From Kissmetrics article -


Funnel Conversion Rate

After you have set up your goal and funnel, and your profile has had some time to collect data, the Funnel Visualization report will display perhaps the single most definitive funnel performance metric in Google Analytics: the Funnel Conversion Rate. If, during funnel setup, you made Step 1 of the funnel required as recommended above, the Funnel Conversion Rate indicates the percentage of visits that included at least one pageview of the first step before at least one pageview of the goal page.

In addition to the overall Funnel Conversion Rate, you can use the report to assess step-to-step drop-off.




These multiple explanations are a little confusing, and I couldn't find an answer in the Google Support Docs either.


query : Is it good SEO to 302 redirect from root URL to language and region subdirectory for the user, then tell Google about alternate sites in the sitemap?

@Shakeerah822

Posted in: #301Redirect #302Redirect #Seo #Sitemap

We use subdirectories on a global top level domain for a multi-regional site:


When a user from Iran enters example.com, it 302 redirects to example.com/fa-IR. When a user from the US enters example.com, it 302 redirects to example.com/en-US.

In the URL section of our sitemap.xml we have something like:

<loc>http://www.example.com/</loc>
<xhtml:link rel="alternate" hreflang="fa-IR" href="http://www.example.com/fa-IR/" />
<xhtml:link rel="alternate" hreflang="en-US" href="http://www.example.com/en-US/" />
<xhtml:link rel="alternate" hreflang="x-default" href="http://www.example.com/fa-IR/" />


This is pretty much how other large websites, such as Microsoft's, seem to work. Is the 302 redirect correct for SEO? Is our approach with the sitemap correct?


query : Wordpress Launcher on Google Cloud Platform - 404 Error

@Shakeerah822

Posted in: #Cloud #Google #Wordpress

I'm testing out the Google Cloud Platform - normally I use dedicated servers and handle everything manually. So it's a bit different for me to press a button and have a utility set up all the settings.

I created a new WordPress blog on a brand new URL. In general it went swimmingly - the blog popped live, uses my new URL, and displays the content.

You can look at individual pages for entries, move through them, and so on.

However, bizarrely, if you look at the bottom of the main page where there is the "1", "2", and so on for the pages of posts, THOSE do not work. It says:

The requested URL /page/2/ was not found on this server.

It's on the exact same URL and everything else works fine. Why would this one normal feature not work? Any ideas? I've tried googling around for answers and am stuck.

Thanks!


 query : Re: SEO executive .htaccess file knowledge Should SEO executive in a company have knowledge of .htaccess file to make changes in that file or it is the task of developer in the company to make

@Shakeerah822

.htaccess is not really an SEO executive's job; it is something that needs to be handled by a programmer. .htaccess is used with Apache and the web application, and in SEO we use .htaccess for URL rewriting, redirecting pages, fixing 404 pages, and some other tasks.


 query : Re: How to use AWS S3 Route 53 with Gmail business domain? I have a business domain hosted in google Daniels@mycompany.com and now I want to make my website hosted in AWS S3 hosting. Problem is

@Shakeerah822

Step 1:

In the AWS Route 53 console, get the name of the four name servers they provide, you will need these for step two.

Step 2:

Log in to the website of the Registrar from whom you bought your domain name. Input those four name servers provided by Route 53 into the Registrar's management console. You do not need any other record of any type whatsoever at the Registrar, so you can remove everything else or leave it; it makes no difference.

Step 3:

Return to the AWS Route 53 console and add your MX records for Google, described here on Google's help page.
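For reference, the classic Google (G Suite) MX set looked like the sketch below the last time I set this up; always copy the current values and priorities from Google's help page rather than from here, as they do change:

Priority  Mail server
1         ASPMX.L.GOOGLE.COM
5         ALT1.ASPMX.L.GOOGLE.COM
5         ALT2.ASPMX.L.GOOGLE.COM
10        ALT3.ASPMX.L.GOOGLE.COM
10        ALT4.ASPMX.L.GOOGLE.COM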

Step 4:

Continuing to work in the AWS Route 53 console, add any other settings you need such as A and CNAME records. Describing these is beyond the scope of your question but you can find this information online easily.

Step 5:

Wait at least an hour for your settings to 'propagate', which is just a fancy way of saying it takes a while for everyone else's computers to notice your settings have changed. Test here if you want something to do while you wait: www.whatsmydns.net/

That's all, you are done. If you made errors in your DNS settings on Route 53, you have to look at the documentation from your email and/or web hosting providers and make sure you followed it precisely.


query : Can uploaded files override Apache permissions?

@Shakeerah822

Posted in: #Apache #Ftp #Permissions #Ubuntu

I teach web development courses to liberal arts majors, and the web server (Ubuntu Apache) is located in my office. At the beginning of each semester, I create public_html directories for each of my students and recursively set the permissions so their files will be served up correctly.

Every once in a while, a student reports permissions problems with a file she has uploaded. It is easy enough to fix this by tweaking permissions in Filezilla, but I have always wondered about the apparently inconsistent nature of this problem.

Possible explanation 1. I screwed up when creating the account and did not actually remember to recursively set the permissions.

Possible explanation 2. The uploaded file already had certain permissions attached to it that override the permissions I had set on the server.

Based on everything I think I understand about web servers, the first option seems far more likely. Is possible explanation 2 even theoretically possible?

Thanks!


 query : Re: Google Analytics: Exclude our own developers, any other method except IP? I am a developer and I am setting up Google Analytics on our application. I would like a way to exclude all our

@Shakeerah822

The solution is to have the developers block Google Analytics with a browser extension, instead of trying to do this from within Google Analytics itself. An ad blocker subscribed to a "block list" of third-party trackers will do, or try something more targeted like the EFF's Privacy Badger.

Like it or not, usage of tools like these is increasing and it is probably best practice to test your site with them anyway. As an additional bonus, your developers will have more privacy online and will be less vulnerable to malware distributed via ad networks, which is a recurring problem.


query : Remove cached domain / pages from Google

@Shakeerah822

Posted in: #Googlebot #GoogleCache #Noarchive #Noindex #Seo

The domain of the website I work for is www.utazzitthon.hu and there is another company in partnership with us; they have a domain, szallas.kutyabarat.hu, that points to our server and our content. Only 3 types of pages should be seen under that domain; all the others should not appear in Google.

Maybe in the beginning the noindex wasn't set everywhere, and 3000+ pages were cached by Google: site:szallas.kutyabarat.hu

Do you have any idea how I could remove them as soon as possible, so they don't weaken our main domain as an SEO factor?

I changed the meta to this now:

<META NAME="ROBOTS" CONTENT="noindex, nofollow, noarchive, nosnippet">


Is waiting for Google to recrawl the site the only way?


 query : Re: How can you get a thumbnail when you share a PDF on Facebook One of my clients writes cookbooks. Some time ago, he wrote an article about the origins of Boston Cream Pie, which he has on

@Shakeerah822

No, there is no such implementation. Open Graph metadata comes from specifically formatted links on a web page, and not from the PDF document itself (or a link to download it). You must create a dedicated page for the PDF to be downloaded from, and put the Open Graph metadata on that page.
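As a sketch, the head of such a download page would carry Open Graph tags along these lines (the title, URL and image are placeholders for your client's article):

<head>
  <meta property="og:type" content="article" />
  <meta property="og:title" content="The Origins of Boston Cream Pie (PDF)" />
  <meta property="og:description" content="A short article on where Boston Cream Pie came from." />
  <meta property="og:url" content="https://example.com/boston-cream-pie/" />
  <meta property="og:image" content="https://example.com/images/boston-cream-pie.jpg" />
</head>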

Facebook's reference documentation page is here, and as you can see it does not include anything about PDFs: developers.facebook.com/docs/reference/opengraph/


query : Arabic keywords in URL in two levels

@Shakeerah822

Posted in: #Internationalization #Localization #Rtl #Seo #Url

I know that localized Arabic keywords in the URL are really good for SEO, but my question is: what if I want to localize the category as well? Let us say that my website structure is like blog/category/post. If the URL is localized in Arabic, then it will look like this: blog/منشور/تصنيف, which is good to the eye, but since the URL gets encoded and decoded, its original form is like this: blog/post/category (in Arabic).

In other words, in order to generate a link in Arabic that looks like blog/category/post, the URL has to be structured like blog/post/category. Is this good or bad for SEO? To the naked eye and in search results the URL looks natural, like blog/تصنيف(category)/منشور(post), but in reality it is structured as blog/منشور(post)/تصنيف(category).


 query : Re: Will removing footer links hinder my site's SEO performance? I have a site which has key location landing pages in the footer which have been there for a long time. I know that these are

@Shakeerah822

You should be fine. Where I've seen this be a problem is where a site simply has a ton of keywords and links that serve no other purpose but to push a higher ranking. Search engines are smarter about identifying that these days, and push those sites out of the rankings or lower down.


 query : Re: Is it illegal to track website visitors' ip addresses? If so, what precautions can you take to make sure you're covered? Essentially what I'm wanting to know is, if I set up my website to

@Shakeerah822

It's not illegal. Often it's encouraged for security purposes to know if an IP keeps spamming visits to your site, so that the server can better determine if it's a possible DDOS (Distributed Denial of Service) attack. You also might use that IP address for login/logout purposes. This information is often used for website analytics purposes, such as what you're doing here.


 query : Re: How to deal with users passwords on website re-design? I have an old Drupal 6 website with about 1000 active users and I want to replace it with new OpenCart 1.5 installation. How to find

@Shakeerah822

My suggestion is not to do this. I would tell the users that you're making a change/upgrade/switch over to a new system. Tell them they'll be prompted for a new password the next time they log in and have them set it up. It's a good opportunity to force everyone to change their password (better security practice). You don't have to tell them they can use their old password; some will try this anyway and it should let them. At least some of your users, if not most, will change their passwords. If any of them have ever had their passwords stolen from another source, they'll be less exposed on your site.

I realize this isn't directly what you're after, but I encourage you not to go down the path you're taking. Plus, it's simply easier, at the risk of being slightly inconvenient for your users. At least they'll know what's happening to the degree they need to. It may even win you some cool points with your users for looking out for their best interest. If you tell them that you are, then they'll be less likely to get upset about it.


 query : Re: Can a user or a crawler see the source of a page that has been redirected via a 301? Is it possible for a user or a web-crawler to see the contents/source code of a webdocument that is

@Shakeerah822

It's theoretically possible, but no current and common web crawlers and browsers take advantage of this, and neither do most servers.

A 301 response does have an HTTP body (i.e. a document), but it's only ever used by clients that don't support redirects or that ignore them. Browsers and search engine crawlers do support redirects and will completely ignore the body sent by the server.

Using a telnet client, you can see the raw response from the server:

$ telnet google.com 80
Trying 216.58.211.142...
Connected to google.com.
Escape character is '^]'.
GET / HTTP/1.1
Host: google.com

HTTP/1.1 302 Found
Cache-Control: private
Content-Type: text/html; charset=UTF-8
Referrer-Policy: no-referrer
Location: http://www.google.fi/?gfe_rd=cr&ei=19l5WYzCMMqq8wfX84ngBA
Content-Length: 258
Date: Thu, 27 Jul 2017 12:17:27 GMT

<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>302 Moved</TITLE></HEAD><BODY>
<H1>302 Moved</H1>
The document has moved
<A HREF="http://www.google.fi/?gfe_rd=cr&amp;ei=19l5WYzCMMqq8wfX84ngBA">here</A>.
</BODY></HTML>


Another "issue" is that the server usually sends a completely different document than the original page. E.g. the above is not the homepage of google.com/ but a page that is automatically generated by the server.

Theoretically, you could make your server send the old page in the redirects and build a web crawler that would look at the HTTP body in redirects. But why would you? It seems pointless.


query : What's better for search engines?

@Shakeerah822

What's better for search engines?
Doesn't matter. What does matter is: What's better for users?


A modern search engine shouldn't rely on strange rules of what is a word separator and what is a word joiner or whatever. Modern search engines can deal with typos, apostrophes, punctuation, etc. Figuring out which characters separate words shouldn't be much of an issue.
I disagree with the people who claim that hyphens are better because they are natural word delimiters. They kind of are, but kind of aren't: they make words semi-attached or semi-delimited. The natural word delimiter is %20.


But the above statements are irrelevant. The URL shouldn't be important anyway.

How important are the keywords in the URL?
They shouldn't be important; if they are, there's obviously no content on the page.


URLs aren't very visible for humans: links may have anchor text instead, it's not shown on the page and it's not shown on the browser tabs.
The <title/> and the main heading are more visible and usually contain the same keywords anyway, making the keywords in the URL redundant.


How important is it for humans?
It depends.

From a search engine point of view: not at all; the user only needs to enter a search query and click on the snippet with an interesting title and description.

But visitors come from other places too. In some cases there's a nice anchor text instead, making the "URL quality" irrelevant, but there are cases when it does matter.


Quick&dirty copy-pasting/sharing of the URL: no issue for the writer, but it does matter for the reader.
Needing to enter the URL manually. (Can't copy text from an image for instance.)


What determines the quality of a URL?


Length; You don't want address bars to scroll horizontally or URL only links to linewrap. And needless to say: it also takes longer to type a longer URL.
Word delimiters; It appears that most people agree that hyphens are better.
Clutter; Eg: Unique IDs, filename extensions, weird URL parameters. These are difficult to remember.
Strange characters and syntax; An outdated example would be tildes (http://example.com/~user/), but URL parameter syntax is a bit strange too. Any uncommon character might be difficult to type for some people.
Safe characters vs Unicode; This is a two-edged sword and deserves its own answer. But briefly: browsers mangle URLs, %c3%a4 etc. is a pain in the ass to type, not every keyboard can enter the unsafe characters, and there is possibly some encoding hell, but the keywords make sense for native speakers.
Length of the text; Consider the URL to be a form of title: don't waste words mentioning the obvious, and ignore grammar.


People will type something slightly different
Your webserver should be designed to redirect recognized non-canonical URLs to their canonical version. It's up to you to decide:


http vs https
www vs no www
trailing slashes vs no trailing slashes


But your server needs to accept and correct all of them.
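As a sketch (assuming Apache with mod_rewrite; the host name is a placeholder), the usual way to fold the variants into one canonical form is a blanket 301:

RewriteEngine On

# Collapse http and non-www onto https://www.example.com in a single hop
RewriteCond %{HTTPS} !=on [OR]
RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC]
RewriteRule ^ https://www.example.com%{REQUEST_URI} [R=301,L]

# Strip trailing slashes, except for real directories
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.+)/$ /$1 [R=301,L]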

A 404 page with search results would be nice for the user. (Use the words from the URL as the search query.)

