
Vanilla Performance Discussion


Comments

  • @businessdad said:
    The curious thing is that, when I tried a siege test (50 simultaneous users), the response times were progressively longer with each request (2 seconds, 2.5, 2.8, 3.4, 4, 4.5, 5.2, up to 9 seconds).

    Oh man, that is so interesting. Sorry to hear you're having these problems, but it would be nice to pinpoint where the problem is and how to resolve it.

    Which actions did you do with that test with 50 simultaneous users? Time to turn on a real timer / logger?


  • businessdadbusinessdad Stealth contributor MVP

    @UnderDog said:
    Which actions did you do with that test with 50 simultaneous users? Time to turn on a real timer / logger?

    I just ran a brutal siege test. I ran it again a couple of minutes ago, with 30 concurrent connections, here are the results: pastebin.com/GuMCAe7H.

    As you can see, the response time grows constantly, then it decreases. It looks like prior requests are slowing down the new ones, then, when they are completed, response times get better. Some sort of overlapping, I would say.

    For reference, the server is a basic VPS with 1 GB of RAM, and only APC has been enabled (no memcached, Varnish, etc.). Vanilla is installed with very few plugins and a custom theme, and there are no users currently hitting the server. The database is practically empty (ten categories and a dozen posts, no activity).

  • LincLinc Detroit Admin

    @businessdad This could be how you have Apache and MySQL set up, not necessarily anything to do with Vanilla. For instance, how many MySQL connections do you allow at a time? My guess after viewing that file is the answer is "10".
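Linc's guess can be sketched with a toy queueing model: if only N slots (DB connections or worker processes) serve requests that each take s seconds, later concurrent requests wait behind earlier batches, which reproduces the steadily growing response times from the siege run. All numbers below are illustrative assumptions, not measurements from the thread.

```python
# Toy model: with a fixed pool of slots, the k-th concurrent request can
# only start once an earlier "batch" finishes, so its total time grows.
import math

def finish_time(request_index, slots, service_time):
    """1-based request index; requests beyond the pool wait in line."""
    wave = math.ceil(request_index / slots)  # which batch the request runs in
    return wave * service_time

# 50 concurrent requests against 10 hypothetical slots, 2 s each:
times = [finish_time(i, slots=10, service_time=2.0) for i in (1, 11, 31, 50)]
print(times)  # -> [2.0, 4.0, 8.0, 10.0]
```

The shape of that output (early requests fast, later ones progressively slower) matches the pattern businessdad observed, which is why a small connection limit is a plausible suspect.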

  • businessdadbusinessdad Stealth contributor MVP
    edited December 2013

    @Lincoln said:
    @businessdad This could be how you have Apache and MySQL set up, not necessarily anything to do with Vanilla. For instance, how many MySQL connections do you allow at a time? My guess after viewing that file is the answer is "10".

    I didn't configure the server; I'm trying to figure out what parameters could be incorrect. I also suspect a bottleneck somewhere. I reckon that Vanilla makes heavier use of the database than other applications, and that's what causes the cumulative delays. I'm far from finished with my investigation. :)

    Edit
    I forgot to answer your question. MySQL is set to accept a maximum of 75 connections.

  • LincLinc Detroit Admin

    How many concurrent users does Apache allow?

  • businessdadbusinessdad Stealth contributor MVP

    @Lincoln said:
    How many concurrent users does Apache allow?

    It's Nginx, not Apache, and it's set to 50 (hence the maximum limit of 50 using siege).
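For reference, the nginx-side concurrency cap usually lives in the events block of nginx.conf; a minimal fragment is shown below (the values are illustrative, not the poster's actual configuration):

```nginx
# nginx.conf fragment (values illustrative)
worker_processes  auto;

events {
    # Per-worker cap on simultaneous connections; total capacity is
    # roughly worker_processes * worker_connections.
    worker_connections  1024;
}
```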

  • peregrineperegrine MVP
    edited April 2014

    @businessdad

    I just experimented with siege on my localhost, seeing how many connections it could deal with :) and opened vmstat in one window and ps in another.
    Pretty cool.

    Because I was trying to see if I could get a clue about another server (some friends I try to help).

    Anyway, the vmstat output that was passed on to me was....

    procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
      r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
     25  3 1392736   1224      0   1620    3    5     7     3    0    3  9  1 90  0  0
    

    It doesn't look good, does it? I see the result: lots of swapping and some page-outs, little free memory.
    But I am wondering what is eating this up, assuming my interpretation is correct.

    The process list passed on to me also had about 100 apache httpd -k restart processes.

    Not sure why restart vs. start. So that indicates probably 100 or so concurrent users, it seems to me.
    So, is the number of concurrent Apache users killing things, or is it the SQL connections? Additional stats here:

    http://vanillaforums.org/discussion/26612/anybody-have-a-forum-few-quick-questions

    any more thoughts?

    And which is the chicken and which is the egg: do SQL connections slow things down and make Apache processes stay around longer, resulting in a big snowball rolling downhill, getting bigger and bigger?
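The reading above can be checked mechanically. Here is a small sketch that parses the quoted vmstat line and applies a simple thrashing heuristic; the field layout matches vmstat's procs/memory/swap header, but the thresholds are illustrative assumptions, not canonical values:

```python
# Parse the vmstat sample from the thread and flag memory pressure.
# vmstat columns: r b swpd free buff cache si so bi bo in cs us sy id wa st
line = "25  3 1392736   1224      0   1620    3    5     7     3    0    3  9  1 90  0  0"
fields = line.split()
r, b = int(fields[0]), int(fields[1])
swpd, free = int(fields[2]), int(fields[3])   # swap used, free memory (KB)
si, so = int(fields[6]), int(fields[7])       # swap-in / swap-out rates

# Heuristic (thresholds are assumptions): heavy swap usage plus very
# little free memory suggests the box is thrashing.
under_pressure = swpd > 0 and free < 16_384
print(under_pressure)  # -> True
```

Applied to the quoted line (about 1.3 GB swapped, ~1 MB free), the heuristic agrees with peregrine's interpretation: something is eating the memory.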

    I may not provide the completed solution you might desire, but I do try to provide honest suggestions to help you solve your issue.

  • businessdadbusinessdad Stealth contributor MVP

    Now that you mention it, we don't use Apache. We run our site on Nginx, following advice we read on the net, and it seems to be faster than Apache (although I'm too rusty as a Linux sysadmin to back this up with numbers and statistics).

    Regarding the swap, you should try to figure out what exactly is causing it. MySQL, like all RDBMSes, is notoriously memory hungry, and can therefore cause issues when it runs on the same server as Apache. On the RDBMS side, you could review the MySQL configuration to ensure it's not set to take too much memory. Additionally, you could try replacing Apache with Nginx and see if that works better.

    Performance issues are always a RPITA to fix, as they could be due to almost anything and everything. They were the most dreaded when my team received support calls from customers. :D

  • @businessdad said:
    Regarding the swap, you should try to figure out what exactly is causing it.

    :) yes that is what I am trying to figure out. :):):)

    Yes, I've suggested nginx.

    RPITA - never heard that acronym - but it was readily apparent :).

    Any suggestions for nginx configuration and rewrite rules?


  • chanhchanh OngETC.com - CMS Researcher ✭✭
    edited April 2014

    RPITA: Royal Pain in the *ss

    I never even considered nginx, but now I feel like I should be using it instead of Apache.

  • R_JR_J Ex-Fanboy Munich Admin

    @peregrine said:

    any suggestions for nginx configuration and rewrite rules.

    I'm using that configuration:

    server {
       server_name  v21b3.dev;
       listen       80;
       root         /var/www/vanilla21b3;
       index        index.php index.html index.htm;
    
       client_max_body_size     20m;
       client_body_buffer_size  128k;
    
       auth_basic "Restricted";
       auth_basic_user_file /var/www/.htpasswd;
    
       location / {
          try_files $uri $uri/ /index.php?p=$uri&$args;
       }
    
       location ~ \.php$ {
          fastcgi_pass   unix:/var/run/php5-fpm.sock;
          fastcgi_index  index.php;
          include        fastcgi_params;
       }
    
       location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
          access_log     off;
          log_not_found  off;
          expires        8d;
       }
    
       location ~* ^/uploads/.*.(html|htm|shtml|php)$ {
          types { }
          default_type text/plain;
       }
    
       location ~ /\. { access_log off; log_not_found off; deny all; }
       location ~ ~$ { access_log off; log_not_found off; deny all; }
       location = /robots.txt { access_log off; log_not_found off; }
       location ^~ favicon { access_log off; log_not_found off; }
       location ^~ /conf/ { internal; }
    }
    

    "Rewrite Rules" in nginx seems to be really fun. You'll see them in "try_files"

  • Thx for the info and rules, appreciate it.


  • chanhchanh OngETC.com - CMS Researcher ✭✭
    edited April 2014

    I am curious whether moving the cache to a super-fast drive would help performance. You could test it by making these changes temporarily:

    in index.php

    define('PATH_ROOT2', 'G:\ssddrive');

    in conf\constants.php

    define('PATH_CACHE', PATH_ROOT2.'/cache');

    My theory is that Vanilla is doing a lot of cache IO on a slower drive, which might cause the CPU to spike due to IO wait.

    Would you be willing to give this a try?

    Thanks

  • @chanh said:
    I am curious whether moving the cache to a super-fast drive would help performance by making these changes temporarily.

    This is hosted on a Linux machine: a VPS with 1 GB RAM and 4 CPU cores allocated.
    Any hardware configuration changes are out of my control.


  • NGINX > Apache, period. But you should be able to get it going fast on Apache with a proper configuration.

    If you are starting to swap, it most likely has to do with MySQL eating up all the memory.

    Can you take a screenshot of 'htop' during a load test and post it?

    You should see MySQL at the top doing the most work, followed by your Apache/PHP processes.

    MySQL does not have a setting to specify maximum memory usage. Maximum memory usage is based on a semi-complicated formula involving several parameters in your my.ini config file for MySQL. SEE: http://www.mysqlcalculator.com/
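That formula can be sketched roughly as global buffers plus per-connection buffers multiplied by max_connections. The buffer values below are illustrative defaults, not a recommendation for any particular server:

```python
# Rough sketch of the mysqlcalculator.com-style estimate:
# worst case = global buffers + max_connections * per-connection buffers.
MB = 1024 * 1024

global_buffers = {       # allocated once for the whole server
    "key_buffer_size": 16 * MB,
    "query_cache_size": 16 * MB,
    "innodb_buffer_pool_size": 512 * MB,
    "innodb_log_buffer_size": 8 * MB,
}
per_connection = {       # potentially allocated per connection
    "sort_buffer_size": 2 * MB,
    "read_buffer_size": 128 * 1024,
    "read_rnd_buffer_size": 256 * 1024,
    "join_buffer_size": 128 * 1024,
    "thread_stack": 256 * 1024,
}

def max_memory(max_connections):
    return sum(global_buffers.values()) + max_connections * sum(per_connection.values())

# With max_connections = 75, the value mentioned earlier in the thread:
print(max_memory(75) / MB)  # -> 758.25 (MB), uncomfortably close to a 1 GB VPS
```

With these example settings, 75 connections could in the worst case consume roughly 758 MB, which on a 1 GB VPS shared with a web server and PHP would plausibly push the box into swap.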

    Find a copy of the mysqltuner.pl script and run it.
    Its output includes a line that looks like this:

    Maximum possible memory usage: 6.2G (80% of installed RAM)

    I give MySQL 80% of my RAM and keep the DB files on an SSD.

    The script will calculate your maximum possible usage for you. Be sure it doesn't come close to, or exceed, your physical memory; exceeding it is very typical in a lot of MySQL setups.
    Do a search on which parameters control this, and tune them down so MySQL doesn't eat all the memory.

    That's the first thing I would do.

    After that, I would limit the total number of forked processes allowed in Apache/PHP.

    NGINX doesn't have this issue: it doesn't fork per request, so its memory usage doesn't really grow under load.
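Limiting forked processes in Apache 2.4's prefork MPM looks roughly like the fragment below. The directive names are real; the numbers are illustrative and should be sized from measured per-process memory so that MaxRequestWorkers times the per-process footprint fits in RAM alongside MySQL:

```apache
# mpm_prefork fragment (Apache 2.4; numbers illustrative)
<IfModule mpm_prefork_module>
    StartServers            5
    MinSpareServers         5
    MaxSpareServers        10
    MaxRequestWorkers      50
    MaxConnectionsPerChild 1000
</IfModule>
```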

  • thanks @jackjitsu will look into it.


  • rotaechorotaecho Los Angeles New

    Howdy! New Vanilla Forums SA here (I have 20 years of UNIX/Linux web-infrastructure administration, though). I'm developing a community forum for which I expect 3,000 immediate members, with growth to 5,000 within the next 3-5 years. This is a home-grown web-site idea of mine (semi-new to the web-development side of the house; I've been more dev-ops, as they call it now-a-days, for the last decade or so) using nginx and Vanilla Forums. I figured the experts here could give me a realistic hardware (resource) scenario and possible suggestions/advice/words of wisdom for the nginx setup. I know with Apache you can always determine MaxClients via the RAM formula. I'd be interested in any such tidbits for the Vanilla Forums software.

    Thanks!

    -Will
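The "RAM formula" Will mentions for Apache's MaxClients can be sketched as: reserve memory for MySQL and the OS, then divide what remains by the average Apache child-process size. All numbers below are illustrative assumptions, not measurements of any real host:

```python
# Classic Apache prefork sizing sketch:
# MaxClients ~= (total RAM - memory reserved for MySQL/OS) / per-process size
MB = 1024 * 1024

def estimate_max_clients(total_ram, reserved, per_process):
    return (total_ram - reserved) // per_process

# Hypothetical 1 GB VPS, ~400 MB reserved for MySQL + OS, ~25 MB per child:
print(estimate_max_clients(1024 * MB, 400 * MB, 25 * MB))  # -> 24
```

The per-process size would normally come from observing resident memory of httpd children in htop or ps during a load test, then rounding up.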

  • rotaechorotaecho Los Angeles New

    @jackjitsu been awhile for that configuration for nginx, could you post it? Thanks!

  • philcophilco New
    edited May 2016

    @x00 said:
    APC you just turn on in PHP; you can use memcached. I recommend clearing activity periodically (this is something fixed in 2.1). Apache is slow; Vanilla performs better with nginx.

    How do you clear the activity, @x00? We have hundreds of thousands of records in the activity table. Do you delete from the database directly, or?
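One generic approach to pruning a table that large, sketched here, is deleting in batches so a single huge DELETE doesn't hold locks for a long time. This is not Vanilla's own tooling; the table name follows Vanilla's default GDN_ prefix, and SQLite stands in for MySQL here (on MySQL the loop body would be a DELETE ... LIMIT statement):

```python
# Batched-delete sketch on an in-memory SQLite table (hypothetical data).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE GDN_Activity (ActivityID INTEGER PRIMARY KEY, DateInserted TEXT)")
conn.executemany(
    "INSERT INTO GDN_Activity (DateInserted) VALUES (?)",
    [("2013-01-01",)] * 500 + [("2016-01-01",)] * 50,
)

def purge_before(conn, cutoff, batch=100):
    """Delete rows older than cutoff, `batch` rows at a time; return total deleted."""
    total = 0
    while True:
        cur = conn.execute(
            "DELETE FROM GDN_Activity WHERE rowid IN "
            "(SELECT rowid FROM GDN_Activity WHERE DateInserted < ? LIMIT ?)",
            (cutoff, batch),
        )
        conn.commit()
        if cur.rowcount == 0:
            return total
        total += cur.rowcount

print(purge_before(conn, "2014-01-01"))  # -> 500; newer rows are kept
```

Either way, back up the table first; on a live forum a purpose-built plugin is the safer route than raw SQL.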

  • vrijvlindervrijvlinder Papillon-Sauvage MVP

    Wow, this post is so old... it qualifies as a necro-post.

    @philco said:
    How do you clear the activity @x00 ? We have hundreds of thousands of records in the activity table. Do you delete from database directly? or?

    Have you tried the ActivityPurge plugin?
