How to find out what is causing the "Entry Process" limit?

@Sarah324

Posted in: #Mysql #Php #Phpmyadmin

We're using an OpenCart 1.5.5.1 custom theme for our site.

Well, it hasn't been smooth sailing for us, and we are facing problem after problem. The gist of it so far: we're currently on a business SSD host (still shared, I believe) on a trial period so that we can load test the site (uniqpcs.co.uk) using Load Impact. On every single test run, the entry process limit is reached, causing the site to become unresponsive and eventually crash (the entry process limit is set to 20 in our cPanel; we even tried 40, but the issue is the same).

We've been in contact with the hosting provider (NameCheap) back and forth through email and live chat without any real help, unfortunately, but they mentioned a few things to me that I hope you can help clarify:

From what I understand, the Entry Process limit is reached when a process (most likely a PHP script) hangs for much longer than expected, and the delay causes more and more processes to pile up, hence the site becoming unresponsive and eventually hitting the limit.

From their MySQL error log they determined that the database is 'apparently' not optimized very well, with 126 tables and a total of 26,791 rows, and they said there is a SELECT query that takes too long to process because of the large number of rows:

Please find MySQL server logs below:

# Time: 140501 8:37:23
# User@Host: auniqpcs_uniqpc[auniqpcs_uniqpc] @ localhost []
# Query_time: 12.059221 Lock_time: 0.004748 Rows_sent: 1 Rows_examined: 26731
SET timestamp=1398947843;
SELECT COUNT(DISTINCT p.product_id) AS total FROM category_path cp LEFT JOIN product_to_category p2c ON (cp.category_id = p2c.category_id) LEFT JOIN product p ON (p2c.product_id = p.product_id) LEFT JOIN product_description pd ON (p.product_id = pd.product_id) LEFT JOIN product_to_store p2s ON (p.product_id = p2s.product_id) WHERE pd.language_id = '1' AND p.status = '1' AND p.date_available <= NOW() AND p2s.store_id = '0' AND cp.path_id = '211';

# User@Host: auniqpcs_uniqpc[auniqpcs_uniqpc] @ localhost []
# Query_time: 12.081561 Lock_time: 0.004338 Rows_sent: 1 Rows_examined: 26746
SET timestamp=1398947843;
SELECT COUNT(DISTINCT p.product_id) AS total FROM category_path cp LEFT JOIN product_to_category p2c ON (cp.category_id = p2c.category_id) LEFT JOIN product p ON (p2c.product_id = p.product_id) LEFT JOIN product_description pd ON (p.product_id = pd.product_id) LEFT JOIN product_to_store p2s ON (p.product_id = p2s.product_id) WHERE pd.language_id = '1' AND p.status = '1' AND p.date_available <= NOW() AND p2s.store_id = '0' AND cp.path_id = '258';
# Time: 140501 8:37:26

# User@Host: auniqpcs_uniqpc[auniqpcs_uniqpc] @ localhost []
# Query_time: 13.821338 Lock_time: 0.005288 Rows_sent: 1 Rows_examined: 26746
SET timestamp=1398947846;
SELECT COUNT(DISTINCT p.product_id) AS total FROM category_path cp LEFT JOIN product_to_category p2c ON (cp.category_id = p2c.category_id) LEFT JOIN product p ON (p2c.product_id = p.product_id) LEFT JOIN product_description pd ON (p.product_id = pd.product_id) LEFT JOIN product_to_store p2s ON (p.product_id = p2s.product_id) WHERE pd.language_id = '1' AND p.status = '1' AND p.date_available <= NOW() AND p2s.store_id = '0' AND cp.path_id = '245';
# Time: 140501 8:37:32

# User@Host: auniqpcs_uniqpc[auniqpcs_uniqpc] @ localhost []
# Query_time: 16.426658 Lock_time: 0.010242 Rows_sent: 1 Rows_examined: 26746
SET timestamp=1398947852;
SELECT COUNT(DISTINCT p.product_id) AS total FROM category_path cp LEFT JOIN product_to_category p2c ON (cp.category_id = p2c.category_id) LEFT JOIN product p ON (p2c.product_id = p.product_id) LEFT JOIN product_description pd ON (p.product_id = pd.product_id) LEFT JOIN product_to_store p2s ON (p.product_id = p2s.product_id) WHERE pd.language_id = '1' AND p.status = '1' AND p.date_available <= NOW() AND p2s.store_id = '0' AND cp.path_id = '245';


While one query is being processed, the load testing tool sends another PHP request, which "passes" a further request to MySQL (while the first SQL query is still running). It is also possible that the PHP requests that hit MySQL are badly coded.

pingdom: tools.pingdom.com/fpt/#!/bzzDid/uniqpcs.co.uk
I would really like to know the exact cause of this problem: is it a badly coded PHP script, is it the MySQL database queries, or is it the host?

Now I'm not too sure what to do... any advice is appreciated.





3 Comments


@LarsenBagley505

According to my web browser, the main page alone is about 35KB of HTML, which is about 15KB more than it should be. Also, the page loaded 458 elements, requiring over 6.4MB of data. A poor soul out there who may still be using a 56K dial-up connection at maximum speed would need to wait at least 19 minutes just for the main page to load in its entirety.

One thing that can help greatly is to consolidate your images and scripts.

Look into CSS sprites. They're an easy way to make images load faster, and they have the benefit of your server being hit less often, making it faster overall.

Merge all your CSS styles together and remove unnecessary spaces; for more speed, remove comments too. Try to have only ONE CSS file per page, not 20+.

Do the same with JavaScript files: consolidate them into one.

When I loaded your site, I noticed a background image that loaded before everything else. I don't see a real benefit to it, and removing it would help make the site faster.

I also noticed bad header output when I went to look at the headers:

HTTP/1.1 200 OK
Date: Mon, 02 Feb 2015 01:39:12 GMT
Server: Apache/2.2.27 (Unix) mod_ssl/2.2.27 OpenSSL/1.0.1e-fips mod_bwlimited/1.4
X-Powered-By: PHP/5.3.28
Pragma: no-cache
Nitro-Cache: Enabled
Cache-Control: public, max-age=31536000
Expires: Wed, 04 Mar 2015 01:39:12 GMT
Set-Cookie: PHPSESSID=c37c0a5a0d115149b494fae5cf5b1d78; path=/
Last-Modified: Sun, 01 Feb 2015 23:41:06 GMT
Vary: Accept-Encoding,User-Agent
Cache-Control: private, no-cache, no-store, proxy-revalidate, no-transform
Pragma: no-cache
Content-Type: text/html; charset=utf-8


The response sends two conflicting Cache-Control headers, and the restrictive one (private, no-cache, no-store) effectively wins, so browsers will likely re-request the page even when a fresh copy is stored locally. To fix that, these lines need to be removed:

Cache-Control: private, no-cache, no-store, proxy-revalidate, no-transform
Pragma: no-cache


I used the command-line tool curl to check the headers.


@Miguel251

"From what I understand....."

Not really. Certainly more requests mean more processes, and longer-running requests mean more concurrent processes. But that should not be your starting point for managing capacity on a server unless your business model is to make money by selling processing capacity.

Determining exactly how many concurrent requests your stack can handle is a complex problem, and one which is going to require much deeper access to your system than cPanel exposes. You've not provided any details of the hardware spec, but I would expect something comparable to a low-spec desktop (2 cores / 2GB) to be able to handle around 70-130 concurrent requests, though this can vary greatly depending on the code.

Your biggest problem currently is your DBMS: your dataset is tiny, but your queries are taking a horrendous amount of time to run. Your database is so small that you're probably getting no significant benefit from the SSD; your data should all fit in RAM. These query times are really dreadful, and unfortunately there's no quick fix. You need to know how to analyze and change the schema; a starting sketch follows.
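
As a first step, MySQL's EXPLAIN will show how the slow query from your log is actually executed. A minimal sketch, reusing the table and column names from the logged query; the index names are hypothetical, and you should skip any index that already exists:

EXPLAIN SELECT COUNT(DISTINCT p.product_id) AS total
FROM category_path cp
LEFT JOIN product_to_category p2c ON (cp.category_id = p2c.category_id)
LEFT JOIN product p ON (p2c.product_id = p.product_id)
LEFT JOIN product_description pd ON (p.product_id = pd.product_id)
LEFT JOIN product_to_store p2s ON (p.product_id = p2s.product_id)
WHERE pd.language_id = '1' AND p.status = '1'
AND p.date_available <= NOW() AND p2s.store_id = '0'
AND cp.path_id = '211';

-- Rows showing type=ALL (a full table scan) are the likely culprits.
-- Composite indexes on the join and filter columns usually help:
CREATE INDEX idx_cp_path ON category_path (path_id, category_id);
CREATE INDEX idx_p2c_category ON product_to_category (category_id, product_id);
CREATE INDEX idx_p_status_date ON product (status, date_available);

If EXPLAIN still reports tens of thousands of examined rows after indexing, the schema itself needs rework.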

I would definitely recommend setting up your own dedicated machine for testing. While you should have sufficient access to identify the cause of most of the database issues without full admin access, admin access will be required for some of the fixes, and you'll need root access to get to the bottom of problems in PHP and the web server.

You have a lot of learning ahead of you. Recommending specific products is frowned upon here, so I will suggest you Google for books on Linux, Apache, MySQL, and PHP performance tuning.

Update:

I just had a look at your site. The front page is not showing symptoms of DBMS problems; the log entries you quoted above may have been anomalies. However, there are some horrendous front-end rendering issues, not least the fact that the front page has over 400 images.


@Sent6035632

According to this:
docs.cloudlinux.com/entry_processes_limit.html
Entry process limits are designed to prevent hackers from slowing down the server.

What you should do is take a closer look at your processing time.

Run a test on webpagetest.org and check the value for "time to first byte". If it's over 1/5th of a second, then it's a processing issue, and the first things to examine are what components the webpage itself needs, and what parts of the database you need data from and when.

So if you are running a webpage that loads an external stylesheet, an external JavaScript file, and a bunch of external images (especially large, high-quality ones) whose URLs point to the same server as the webpage itself, then expect more slowdowns, as the number of requests per user can multiply rather quickly.

Also, check the database. If code that runs on every request does something complex against your biggest table, such as getting a live count of its rows, then expect an extra 200ms of wait time per request. Try to fix this by storing frequently requested data in a separate table and reading from it.

Example:

Say there's a table called CFG that stores random configuration data in fields named "name" and "value", and a table called ABC with over 1,000,000 records in it, where you don't know exactly how many there are.

Naturally one could execute:

Select count(*) as Total from ABC;


This returns the number of items in a result column named Total. I just ran it on a table with over 900,000 records on a Pentium 4, and it took about 4 seconds to receive the count.

What you want to do instead in this example is add a new configuration value that can be constantly looked up, like so:

Insert Into CFG(Name,Value) Values("tabletotal","0");
Update CFG set Value=(Select count(*) as Total from ABC) where CFG.Name="tabletotal";


Then to verify the number has been added, look up your entire configuration table with this statement:

Select * from CFG;


Then in your script, if you used an example like this, change:

Select count(*) as Total from ABC;


To:

Select Value from CFG where Name="tabletotal";


You'll save a huge chunk of processing time, and the odds of hitting the entry process limit will be considerably lower.
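
One caveat with this approach: the cached count goes stale as rows are inserted or deleted. A minimal sketch of one way to keep it fresh, assuming MySQL trigger support and reusing the hypothetical CFG and ABC tables above (the trigger names are made up too):

-- Keep the cached total in step with changes to ABC
-- (Value is stored as text; MySQL coerces it to a number here):
Create Trigger abc_count_up After Insert On ABC
For Each Row Update CFG Set Value = Value + 1 Where Name = "tabletotal";

Create Trigger abc_count_down After Delete On ABC
For Each Row Update CFG Set Value = Value - 1 Where Name = "tabletotal";

With the triggers in place, the Select from CFG always returns a current count without ever scanning ABC.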
