NGINX : owncloud + spdy

This is my example configuration for nginx version 1.7

# Example taken from the ownCloud administrator manual
# http://goo.gl/63Mb9k
server {
 listen 80;
 server_name cloud.moscheni.it;
 return 301 https://$server_name$request_uri; # enforce https
}
server {
 listen 443 deferred spdy ssl;
 listen [::]:443 deferred ssl spdy ipv6only=on;
 server_name cloud.moscheni.it;
 ssl_certificate /var/www/cert/moscheni.it/ssl-unified.crt;
 ssl_certificate_key /var/www/cert/moscheni.it/moscheni_it.key;
 ssl_trusted_certificate /var/www/cert/moscheni.it/ssl-trusted.crt;
 ssl_session_cache shared:SSL:10m; ## session cache
 ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ## no vulnerable SSLv3
 add_header Strict-Transport-Security max-age=31536000;
 ## always use https, don't allow http
 add_header X-Frame-Options DENY;
 ## don't allow to render site in frame
 ssl_prefer_server_ciphers on;
 ## let server decide which protocol fits best
 ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;
 ## secure ciphers
 ssl_session_tickets on;
 ssl_stapling on;
 ssl_stapling_verify on;
 resolver 8.8.8.8 8.8.4.4 valid=300s;
 spdy_headers_comp 1;
 # Path to the root of your installation
 root /var/www/owncloud/;
 client_max_body_size 10G; # set max upload size
 fastcgi_buffers 64 4K;
 client_body_buffer_size 2M;
 rewrite ^/caldav(.*)$ /remote.php/caldav$1 redirect;
 rewrite ^/carddav(.*)$ /remote.php/carddav$1 redirect;
 rewrite ^/webdav(.*)$ /remote.php/webdav$1 redirect;
 index index.php;
 error_page 403 /core/templates/403.php;
 error_page 404 /core/templates/404.php;
 location = /robots.txt {
 allow all;
 log_not_found off;
 access_log off;
 }
 location ~ ^/(?:\.htaccess|data|config|db_structure\.xml|README) {
 deny all;
 }
 location / {
 # The following 2 rules are only needed with webfinger
 rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
 rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;
 rewrite ^/.well-known/carddav /remote.php/carddav/ redirect;
 rewrite ^/.well-known/caldav /remote.php/caldav/ redirect;
 rewrite ^(/core/doc/[^\/]+/)$ $1/index.html;
 try_files $uri $uri/ /index.php;
 }
 location ~ \.php(?:$|/) {
 fastcgi_split_path_info ^(.+\.php)(/.+)$;
 include fastcgi_params;
 fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
 fastcgi_param PATH_INFO $fastcgi_path_info;
 #fastcgi_param HTTPS on;
 fastcgi_param HTTPS $server_https;
 fastcgi_pass php-handler;
 fastcgi_read_timeout 600; # Increase this to allow larger uploads
 access_log off; # Disable logging for performance
 fastcgi_param MOD_X_ACCEL_REDIRECT_ENABLED on;
 }
 # Optional: set long EXPIRES header on static assets
 location ~* \.(?:jpg|jpeg|gif|bmp|ico|png|css|js|swf)$ {
 expires 365d;
 # Optional: Don't log access to assets
 access_log off;
 }
}
map $scheme $server_https {
 default off;
 https on;
}
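On nginx 1.1.11 and newer, the map-based $server_https trick above can be dropped in favour of the built-in $https variable, which is "on" over SSL and empty otherwise. A minimal sketch of the equivalent fastcgi line (not from the original config):

```nginx
# inside the PHP location block; replaces the map-based $server_https
fastcgi_param HTTPS $https if_not_empty;
```

The if_not_empty flag keeps the parameter unset on plain-HTTP requests instead of passing an empty value to PHP.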


nginx + php5-fpm & php-apc + memcached

 Install Nginx 

add the official repository

vi /etc/apt/sources.list
# nginx
deb http://nginx.org/packages/debian/ wheezy nginx

install

aptitude install nginx nginx-extras mysql-server mysql-client memcached php5-fpm php5-gd php5-mysql php-apc php-pear php5-cli php5-common php5-curl php5-mcrypt php5-cgi php5-memcached

Enable Nginx microcache
Microcaching requires the /var/cache/nginx/microcache directory, which doesn't exist by default. You must create it and grant the appropriate permissions to the Nginx user (in Debian 7 that's www-data):

mkdir -p /var/cache/nginx/microcache/
chown www-data:www-data /var/cache/nginx/microcache
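Creating the directory alone does nothing; nginx only uses it once a cache zone references it. A minimal sketch for the http block (the zone name "microcache" and the sizes are illustrative assumptions, not part of the original setup):

```nginx
# in the http {} block; zone name and sizes are examples, tune to taste
fastcgi_cache_path /var/cache/nginx/microcache levels=1:2
                   keys_zone=microcache:10m max_size=100m inactive=10m;
```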

APC  – Alternative PHP Cache

tweak your APC settings with the following, or adjust as you wish

vi /etc/php5/mods-available/apc.ini

 

add these lines:
apc.enabled=1
apc.shm_segments=1
apc.shm_size=64M
apc.ttl=7200
apc.write_lock = 1
apc.slam_defense = 0

 

Memcached

It is powerful if you use WordPress with W3 Total Cache.
The configuration file is /etc/memcached.conf.
It is probably good to increase the memory pool to at least 128MB (the default is 64MB).
Change line 23 from -m 64 to -m 128
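The edit can also be scripted with sed; a sketch that demonstrates it on a sample file (point CONF at /etc/memcached.conf for real use):

```shell
#!/bin/sh
# Sketch: raise memcached's memory pool from 64MB to 128MB.
# Demoed on a sample file; set CONF=/etc/memcached.conf for real use.
CONF=$(mktemp)
printf -- '-d\nlogfile /var/log/memcached.log\n-m 64\n-p 11211\n' > "$CONF"
# Replace the memory-pool line in place
sed -i 's/^-m 64$/-m 128/' "$CONF"
grep '^-m' "$CONF"    # prints: -m 128
```

Remember to restart memcached afterwards for the new pool size to take effect.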

Optimize MySQL 

just a bit of optimization

edit /etc/mysql/my.cnf (the Debian location; other distros may use /etc/my.cnf)
In the "[mysqld]" block, add this:

# Activate query cache
query_cache_limit=2M
query_cache_size=64M
query_cache_type=1

# Max number of connections
max_connections=400

# Reduce timeouts
interactive_timeout=30
wait_timeout=30
connect_timeout=10

The following are kernel sysctls, not MySQL options; run them as root to increase the incoming connection backlog:
sysctl -w net.core.netdev_max_backlog=4096
sysctl -w net.core.somaxconn=4096
sysctl -w net.ipv4.tcp_max_syn_backlog=4096
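Settings applied with sysctl -w are lost on reboot. A sketch of persisting them (written to a temp file here; use a real file such as /etc/sysctl.d/90-backlog.conf, then load it with sysctl -p):

```shell
#!/bin/sh
# Sketch: persist the backlog sysctls across reboots.
# Demoed on a temp file; use /etc/sysctl.d/90-backlog.conf for real.
SYSCTL_FILE=$(mktemp)
cat >> "$SYSCTL_FILE" <<'EOF'
net.core.netdev_max_backlog = 4096
net.core.somaxconn = 4096
net.ipv4.tcp_max_syn_backlog = 4096
EOF
# Load with: sysctl -p "$SYSCTL_FILE"
wc -l < "$SYSCTL_FILE"    # 3 settings written
```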

PHP Sessions on tmpfs

Though I have been discussing this solution in the case of caches, file-based PHP sessions can be set up in a similar manner. First work out where session files for your PHP installation are stored. Note that if you're using PHP-FPM you may need to modify the second configuration line.

# /etc/php.ini
session.save_path = /var/lib/php/session
# /etc/php-fpm.conf
php_value[session.save_path] = /var/lib/php/session

We can then make sure that the directory has been created, along with the fallback permissions. To try out the performance increase temporarily, mount a 'tmpfs' partition over the session directory, setting ownership to the desired user (the umount line reverts it):

mkdir -p /var/lib/php/session
# fallback
chown nginx:nginx /var/lib/php/session
chmod 755 /var/lib/php/session
# temporary mount
mount -t tmpfs -o size=32m,mode=0755,uid=$(id -u nginx),gid=$(id -g nginx) tmpfs /var/lib/php/session
umount /var/lib/php/session

If you are satisfied with the configuration, you can persist the mount across reboots by adding the following line to your 'fstab' file:

echo "tmpfs /var/lib/php/session tmpfs size=32m,uid=$(id -u nginx),gid=$(id -g nginx),mode=0755 0 0" >> /etc/fstab

 


Speed up a new VPS

Reducing system logging activity

In a default distro install, system logging is often configured fully, which suits a server or multi-user system. On a single-user system, however, the constant writing of many system log files reduces interactive performance, and cutting logging activity benefits both performance and the lifetime of flash memory.

In a Debian Wheezy installation, the default system logger is rsyslog, and its configuration file is /etc/rsyslog.conf. In the rules section, the following logs are often enabled by default:

auth,authpriv.*                        /var/log/auth.log
*.*;auth,authpriv.none         -/var/log/syslog
cron.*                         /var/log/cron.log
daemon.*                       -/var/log/daemon.log
kern.*                         -/var/log/kern.log
lpr.*                          -/var/log/lpr.log
mail.*                         -/var/log/mail.log
user.*                         -/var/log/user.log

There may also be rules for -/var/log/debug and -/var/log/messages, and |/dev/xconsole.

Note that kernel messages are logged in both kern.log and syslog, in addition to being available from the dmesg command from kernel memory. In a single user system, it is possible to disable most or all of these logs by placing a ‘#’ character at the start of the corresponding lines. Logs can be re-enabled if it is necessary to debug a system problem.
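Commenting the rules out can be scripted; a sketch that demonstrates the sed edits on a sample file (run it against /etc/rsyslog.conf for real, then restart rsyslog):

```shell
#!/bin/sh
# Sketch: comment out noisy rsyslog rules. Demoed on a sample file;
# point CONF at /etc/rsyslog.conf for real use, then restart rsyslog.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
auth,authpriv.*    /var/log/auth.log
cron.*             /var/log/cron.log
lpr.*              -/var/log/lpr.log
EOF
# Disable cron and lpr logging; keep auth (useful even on single-user boxes)
sed -i -e 's|^cron\.|#cron.|' -e 's|^lpr\.|#lpr.|' "$CONF"
grep -c '^#' "$CONF"    # prints 2: two rules disabled
```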

Special flags for your mounts

By default, many distributions including Ubuntu use the 'relatime' flag, updating file metadata when files are accessed; if you're unlikely to care about last access time you can skip this. Doing so improves both performance and, more importantly, the longevity of your SSD by reducing unnecessary writes.

To make all these changes, open up a terminal and run:

sudo nano -w /etc/fstab

Then for all SSD devices in your system remove ‘relatime’ if present and add ‘noatime’ so it looks something like this:

/dev/sdaX   /   ext4   defaults,noatime,errors=remount-ro 0 1
/dev/sdaY   /home   ext4   defaults,noatime,errors=remount-ro 0 2

As you can see I used neither nodiratime nor discard. For the former, noatime already implies nodiratime. As for the latter, I have experienced performance drawbacks when performing operations on large numbers of small files.

If it’s temporary move it to RAM

Every day, applications generate a lot of log files, so to reduce unnecessary writes to disk, move the temp directories onto a RAM disk using the 'tmpfs' filesystem, which dynamically expands and shrinks as needed.

In your /etc/fstab, add the following:

tmpfs   /tmp       tmpfs   defaults,noatime,mode=1777   0  0
tmpfs   /var/spool tmpfs   defaults,noatime,mode=1777   0  0
tmpfs   /var/tmp   tmpfs   defaults,noatime,mode=1777   0  0

If you don’t mind losing log files between boots, and unless you’re running a server you can probably live without them, also add:

tmpfs   /var/log   tmpfs   defaults,noatime,mode=0755   0  0
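Before rebooting, it is worth sanity-checking that each new fstab line has the canonical six fields (device, mountpoint, type, options, dump, pass); a sketch on a sample file:

```shell
#!/bin/sh
# Sketch: verify that fstab entries have exactly six fields.
# Demoed on a sample; point FSTAB at /etc/fstab for real use.
FSTAB=$(mktemp)
cat > "$FSTAB" <<'EOF'
tmpfs   /tmp       tmpfs   defaults,noatime,mode=1777   0  0
tmpfs   /var/tmp   tmpfs   defaults,noatime,mode=1777   0  0
tmpfs   /var/log   tmpfs   defaults,noatime,mode=0755   0  0
EOF
awk 'NF == 6' "$FSTAB" | wc -l    # 3 well-formed entries
```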

Optimizing cache directories

Applications like browsers and window managers that use a disk cache may conform to the XDG Base Directory Specification standard. In that case, the environment variable XDG_CACHE_HOME defines the directory where cache files are stored. By setting this variable to a ramdisk location, it is possible to greatly speed-up the performance of certain browsers that otherwise stall with heavy writing to the disk-cache in flash memory. For example, in a Debian Wheezy installation with LXDE, there may be a configuration file called /etc/alternatives/x-session-manager. By adding the line

export XDG_CACHE_HOME="/dev/shm/.cache"

to the start of this file, programs running in X that conform to the standard will use the ramdisk in /dev/shm to store cache files. One browser popular in lightweight setups that benefits from this is Midori; its default disk cache size is 100MB, which you can lower to keep the ramdisk footprint small.
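A sketch of the export together with creating the directory (this assumes /dev/shm is mounted, which is standard on modern Linux distributions):

```shell
#!/bin/sh
# Sketch: point XDG-conforming applications at a ramdisk cache.
export XDG_CACHE_HOME="/dev/shm/.cache"
mkdir -p "$XDG_CACHE_HOME"
ls -d "$XDG_CACHE_HOME"    # confirms the directory exists
```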

Better Linux Disk Caching & Performance with vm.dirty_ratio & vm.dirty_background_ratio

File caching is an important performance improvement, and read caching is a clear win in most cases, balanced against applications using the RAM directly. Write caching is trickier. The Linux kernel stages disk writes into cache, and over time asynchronously flushes them to disk. This has a nice effect of speeding disk I/O but it is risky. When data isn’t written to disk there is an increased chance of losing it.

There is also the chance that a lot of I/O will overwhelm the cache, too. Ever written a lot of data to disk all at once, and seen large pauses on the system while it tries to deal with all that data? Those pauses are a result of the cache deciding that there’s too much data to be written asynchronously (as a non-blocking background operation, letting the application process continue), and switches to writing synchronously (blocking and making the process wait until the I/O is committed to disk). Of course, a filesystem also has to preserve write order, so when it starts writing synchronously it first has to destage the cache. Hence the long pause.

The nice thing is that these are controllable options, and based on your workloads & data you can decide how you want to set them up. Let’s take a look:

$ sysctl -a | grep dirty
 vm.dirty_background_ratio = 10
 vm.dirty_background_bytes = 0
 vm.dirty_ratio = 20
 vm.dirty_bytes = 0
 vm.dirty_writeback_centisecs = 500
 vm.dirty_expire_centisecs = 3000

vm.dirty_background_ratio is the percentage of system memory that can be filled with “dirty” pages — memory pages that still need to be written to disk — before the pdflush/flush/kdmflush background processes kick in to write it to disk. My example is 10%, so if my virtual server has 32 GB of memory that’s 3.2 GB of data that can be sitting in RAM before something is done.

vm.dirty_ratio is the absolute maximum amount of system memory that can be filled with dirty pages before everything must get committed to disk. When the system gets to this point all new I/O blocks until dirty pages have been written to disk. This is often the source of long I/O pauses, but is a safeguard against too much data being cached unsafely in memory.

vm.dirty_background_bytes and vm.dirty_bytes are another way to specify these parameters. If you set the _bytes version the _ratio version will become 0, and vice-versa.

vm.dirty_expire_centisecs is how long something can be in cache before it needs to be written. In this case it’s 30 seconds. When the pdflush/flush/kdmflush processes kick in they will check to see how old a dirty page is, and if it’s older than this value it’ll be written asynchronously to disk. Since holding a dirty page in memory is unsafe this is also a safeguard against data loss.

vm.dirty_writeback_centisecs is how often the pdflush/flush/kdmflush processes wake up and check to see if work needs to be done.

There are also scenarios where a system has to deal with infrequent, bursty traffic to slow disk (batch jobs at the top of the hour, midnight, writing to an SD card on a Raspberry Pi, etc.). In that case an approach might be to allow all that write I/O to be deposited in the cache so that the background flush operations can deal with it asynchronously over time:

vm.dirty_background_ratio = 5
vm.dirty_ratio = 80

Here the background processes will start writing right away when it hits that 5% ceiling, but the system won't force synchronous I/O until it gets to 80% full. From there you just size your system RAM and vm.dirty_ratio to be able to consume all the written data. Again, there are tradeoffs with data consistency on disk, which translates into risk to data. Buy a UPS and make sure you can destage cache before the UPS runs out of power.
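To see what those ratios mean in absolute terms, a quick sketch for a hypothetical machine with 32 GB of RAM:

```shell
#!/bin/sh
# Sketch: convert the example dirty ratios into megabytes of cache
# for a machine with 32 GB (32768 MB) of RAM.
RAM_MB=32768
echo "background flushing starts at $(( RAM_MB * 5 / 100 )) MB of dirty pages"
echo "synchronous (blocking) writes start at $(( RAM_MB * 80 / 100 )) MB"
```

That is roughly 1.6 GB before background flushing begins and about 26 GB before writers block, which shows how aggressive an 80% dirty_ratio really is.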

No matter the route you choose you should always be gathering hard data to support your changes and help you determine if you are improving things or making them worse. In this case you can get data from many different places, including the application itself, /proc/vmstat, /proc/meminfo, iostat, vmstat, and many of the things in /proc/sys/vm.

Check if swap is enabled on your VPS; if it isn't, create it

The “free” command shows your system’s available physical and virtual memory.

If you have virtual memory enabled already, you can skip ahead to “A Note About Swap Partitions” and then the configuration section. When enabled, the output will look like this:

bash-root@moscheni.it:/# free
             total       used       free     shared    buffers     cached
Mem:        361996     360392       1604          0       1988      54376
-/+ buffers/cache:     304028      57968
Swap:       249896          0     249896
bash-root@moscheni.it:/# _

If it is not enabled, the output will look like this:

bash-root@moscheni.it:/# free
             total       used       free     shared    buffers     cached
Mem:        361996     360392       1604          0       2320      54444
-/+ buffers/cache:     303628      58368
Swap:            0          0          0
bash-root@moscheni.it:/# _

You can also narrow down the output with free | grep Swap. This will only show the Swap: line: total, used and free VM. (Remember, by default grep is case sensitive!)

bash-root@moscheni.it:/# free | grep Swap
Swap:       249896          0     249896
bash-root@moscheni.it:/# _

 

Virtual memory allows your system (and thus your apps) additional virtual RAM beyond what your system physically has – or in the case of droplets, what is allocated. It does this by using your disk for the extra, ‘virtual’ memory and swaps data in and out of system memory and virtual memory as it’s needed.

To create and enable it:

root@moscheni:~# dd if=/dev/zero of=/var/swap.img bs=1024k count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 7.45802 s, 281 MB/s
root@moscheni:~# mkswap /var/swap.img 
Setting up swapspace version 1, size = 2047996 KiB
no label, UUID=5331c54f-d407-40c1-9eb9-a019cacb6ee8
root@moscheni:~# swapon /var/swap.img 
root@moscheni:~# free -m 
 total used free shared buffers cached
Mem: 5988 2126 3861 0 5 2017
-/+ buffers/cache: 103 5884
Swap: 1999 0 1999
root@moscheni:~# 

I prefer between 512MB and 2GB.
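The dd step can be parameterized by the desired size; a sketch using a small 8 MiB demo file (the mkswap and swapon steps, which need root, are left as comments):

```shell
#!/bin/sh
# Sketch: create a swap file of SWAP_MB mebibytes with dd.
# 8 MiB demo here; use 512-2048 for a real swap file at /var/swap.img.
SWAP_MB=8
IMG=$(mktemp)
dd if=/dev/zero of="$IMG" bs=1024k count="$SWAP_MB" 2>/dev/null
wc -c < "$IMG"    # 8388608 bytes = 8 MiB
# Then, as root:
#   mkswap "$IMG" && swapon "$IMG"
```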


SonicWall NetExtender Client Linux

How to install the Dell SonicWall NetExtender Client on Linux, 64/32 bit

1) download the latest version from:
https://sslvpn.demo.sonicwall.com/cgi-bin/portal , log in as "demo" with password "password",
click NetExtender

2)
# tar xzf NetExtender.Linux.7.0.749.x86_64.tgz
# cd netExtenderClient/
# ./install
— Dell SonicWALL NetExtender 7.0.749 Installer —
Checking library dependencies…
Checking pppd…
Do you want non-root users to be able to run NetExtender?
If so, I can set pppd to run as root, but this could be
considered a security risk.
Set pppd to run as root [y/N]? y
mode of "/usr/sbin/pppd" retained as 4754 (rwsr-xr--)
mode of "/usr/sbin/pppd" changed from 4754 (rwsr-xr--) to 4755 (rwsr-xr-x)
mode of "/etc/ppp" retained as 0755 (rwxr-xr-x)
mode of "/etc/ppp/peers" changed from 2750 (rwxr-s---) to 2754 (rwxr-sr--)
mode of "/etc/ppp/peers/provider" changed from 0640 (rw-r-----) to 0644 (rw-r--r--)
mode of "/etc/ppp/peers" changed from 2754 (rwxr-sr--) to 2755 (rwxr-sr-x)
Copying files…
Compatibility mode: SUSE/Ubuntu

———————— INSTALLATION SUCCESSFUL ———————–

To launch NetExtender, do one of the following:

1. Click the NetExtender icon under the Applications menu
(look under the ‘Internet’ or ‘Network’ category)
or
2. Type ‘netExtenderGui’

# netExtenderGui
2013-05-18 10:26:39 CEST INFO com.sonicwall.NetExtender Logging initialized.
2013-05-18 10:26:40 CEST INFO com.sonicwall.NetExtender NetExtender version 7.0.749
Making a global reference ot the NetExtenderControl object registered with JNI
Compatibility mode: SUSE/Ubuntu
NetExtender for Linux – Version 7.0.749
Dell SonicWALL
Copyright (c) 2013 Dell

2013-05-18 10:26:40 CEST INFO com.sonicwall.gui.PreferencesDialog createLogPanel()


Get IP address of SSH remote user

To get the remote IP address of an SSH user, you can:

---------------------------------------------------------------------------
IP_SORG=`last | head -1 | awk '{ print $3 }'`
echo "You are connected from: $IP_SORG"
---------------------------------------------------------------------------
other way
---------------------------------------------------------------------------
HOST=`who am i | sed -r "s/.*\((.*)\).*/\\1/"`
IP=`host $HOST | sed -r "s/.* has address (.*)/\\1/"`
---------------------------------------------------------------------------
other way
---------------------------------------------------------------------------
echo $SSH_CLIENT | cut -d ' ' -f 1
or
echo $SSH_CONNECTION | cut -d ' ' -f 1
---------------------------------------------------------------------------
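The SSH_CLIENT variable holds "client_ip client_port server_port", so the cut shown above can be wrapped in a small helper; a sketch with a simulated value:

```shell
#!/bin/sh
# Sketch: extract the first field (the client IP) from an
# SSH_CLIENT-style string: "client_ip client_port server_port".
ssh_client_ip() {
    echo "$1" | cut -d ' ' -f 1
}
# Simulated value; in a live session pass "$SSH_CLIENT" instead.
ssh_client_ip "203.0.113.7 52314 22"    # prints 203.0.113.7
```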


Installing Debian GNU/kFreeBSD with ZFS support

What is Debian GNU/kFreeBSD?
Debian GNU/kFreeBSD is a Debian operating system that uses the FreeBSD kernel instead of the Linux kernel (hence the name). The day will come when most applications exist in both Debian GNU/kFreeBSD and Debian GNU/Linux.

What is ZFS?
ZFS is a new kind of file system that provides simple administration, transactional semantics, end-to-end data integrity and immense (128-bit) scalability. ZFS is not an incremental improvement on existing technology; it is a fundamentally new approach to data management, created by dropping some basic assumptions that date back 20 years.

How to install it?

1) Download the amd64 ISO of Debian kFreeBSD, for example like this:
curl -C - -L -O http://ftp.nl.debian.org/debian/dists/squeeze/main/installer-kfreebsd-amd64/current/images/netboot/mini.iso

2) Start the default installation (yes, I'm lazy).

3) Follow the steps for language, mirror, network configuration, user and password. In the attached virtual machine: root password "ferzip", and user "user" with password "ferzip".

4) Warning: in virtual machines, an error about flushing the disk cache may appear during installation.

5) Then proceed with formatting the disk. Note: if you use the guided procedure, customize the partition by selecting the ZFS filesystem.

6) Proceed with the normal installation and, when asked, select the desired option.

7) Continue with the guided procedure to the end.

8) Ta-da! Here is the newly installed system:
user@ferzip-freeBSD-zfs:~$ uname -a
GNU/kFreeBSD ferzip-freeBSD-zfs 8.1-1-amd64 #0 Wed Oct 19 14:57:54 CEST 2011 x86_64 amd64 Intel(R) Core(TM) i3 CPU M 350 @ 2.27GHz GNU/kFreeBSD
user@ferzip-freeBSD-zfs:~$ mount
ferzip-freeBSD-zfs-ad0s1 on / (zfs, local)
devfs on /dev (devfs, local, multilabel)
linprocfs on /proc (linprocfs, local)
linsysfs on /sys (linsysfs, local)
fdescfs on /dev/fd (fdescfs)
tmpfs on /lib/init/rw (tmpfs, local, nosuid)
