Alex Pierrot


Creating cgroups in RHEL/CentOS 7

Intro

Red Hat provides a clear explanation of what control groups (cgroups) are – https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/resource_management_guide/chap-introduction_to_control_groups

TL;DR:

cgroups allow system resources to be limited for particular users’ processes, as defined in configuration files. This is useful if, for example, you want to cap a compiler’s maximum memory usage so it cannot grind the system to a halt.

Prerequisites

libcgroup-tools needs to be installed; check with
sudo yum list installed libcgroup-tools

If not installed, run

sudo yum install libcgroup-tools -y

Creating cgroups

In RHEL 7, you can list the resource controllers which are mounted by default using
lssubsys
If you are configuring any of the listed controllers, you do not need to mount them in your configuration file.
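If libcgroup-tools is not yet installed, the kernel’s own list of available controllers can also be read straight from /proc/cgroups (a sketch, assuming a standard Linux host):

```shell
# The first line of /proc/cgroups is a header row;
# the first column of each following line is a controller name
awk 'NR > 1 { print $1 }' /proc/cgroups
```

The output typically includes memory, cpu, cpuset and blkio, among others.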

The syntax of /etc/cgconfig.conf, the default control group configuration file, is:

/etc/cgconfig.conf

group <groupname> {
        [permissions] #optional
        <controller> {
                <param name> = <param value>
                ...               
        }
...
}

If this is your first time configuring control groups on a system, configure the service to start at boot:

sudo systemctl enable cgconfig

Start the service:

sudo systemctl start cgconfig

Once you have created control groups, you need to modify /etc/cgrules.conf, which assigns user processes to control groups:
NB: the <process> parameter is optional

<user>:<process>    <controllers> <controlgroup>

If this is the first time control groups are created on a particular system, configure the service to start at boot:

sudo systemctl enable cgred

Start the service:

sudo systemctl start cgred

Example: Limiting PyCharm memory usage to 40% of total RAM & Swap

Work out 40% of total system memory in kB:
awk '/MemTotal/{printf "%d\n", $2 * 0.4}' < /proc/meminfo
3931969

Work out 40% of total system swap in kB:

awk '/SwapTotal/{printf "%d\n", $2 * 0.4}' < /proc/meminfo
2018507  

Add the two together: 3931969 + 2018507 = 5950476
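The three steps above can be combined into a single awk one-liner (assumes a Linux host; /proc/meminfo reports values in kB):

```shell
# Sum 40% of MemTotal and 40% of SwapTotal, printed in kB
awk '/^(MemTotal|SwapTotal):/ { sum += $2 * 0.4 } END { printf "%d\n", sum }' /proc/meminfo
```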

Create the control group and set memory limits:

/etc/cgconfig.conf

group pycharm {
        memory {
                memory.limit_in_bytes = 3931969k;
                memory.memsw.limit_in_bytes = 5950476k;
        }
}

Start the service:

sudo systemctl start cgconfig

This will create the pycharm cgroup under /sys/fs/cgroup/memory, owned by root as we did not specify any custom permissions:

ls -l /sys/fs/cgroup/memory | grep pycharm
drwxr-xr-x  2 root root 0 Jun  1 08:27 pycharm

Assign all users’ PyCharm processes to the control group in /etc/cgrules.conf:
NB: find the process’ real path if the command is an alias or wrapper script. In this example pycharm → /usr/local/bin/pycharm → /opt/pycharm-community-2017.3.4/bin/pycharm.sh

*:pycharm   memory  pycharm
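Resolving a chain like pycharm → … → pycharm.sh can be scripted with readlink -f, which follows every symlink (a sketch; the example resolves sh, since pycharm may not be on the PATH of the host you test on – run it with pycharm instead on the host in question):

```shell
# Print the symlink-free path of a command found on $PATH
resolve() { readlink -f "$(command -v "$1")"; }
resolve sh
```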

Start the service:

sudo systemctl start cgred

Check that the cgroup is correctly configured, i.e. that launching pycharm populates the files in /sys/fs/cgroup/memory/pycharm:

#Before running pycharm
cat /sys/fs/cgroup/memory/pycharm/memory.usage_in_bytes
0
#Launch pycharm process
pycharm &
#Check mem usage file is populated
cat /sys/fs/cgroup/memory/pycharm/memory.usage_in_bytes
934385456

Example: Stress testing memory in a cgroup

It is a VERY good idea to stress test the defined RAM limits for a new cgroup.

This can be done by adding the stress command to the cgroup in /etc/cgrules.conf:

<user>:stress memory  <groupname>

Confirm that the stress process can reach the cgroup memory limit, e.g. 40% of total RAM:

cat /sys/fs/cgroup/memory/<groupname>/memory.usage_in_bytes
0
#Start stress process
stress --vm-bytes $(awk '/MemTotal/{printf "%d\n", $2 * (40 / 100);}' < /proc/meminfo)k --vm-keep -m 1
cat /sys/fs/cgroup/memory/<groupname>/memory.usage_in_bytes
3879653376
#Use CTRL+C to stop the stress process

Confirm that the stress process is killed if it tries to use more RAM than permitted e.g. 80% of RAM:

cat /sys/fs/cgroup/memory/<groupname>/memory.usage_in_bytes
0
#Start stress process and confirm it is killed using sig 9
stress --vm-bytes $(awk '/MemTotal/{printf "%d\n", $2 * (80 / 100);}' < /proc/meminfo)k --vm-keep -m 1
stress: info: [89346] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: FAIL: [89346] (415) <-- worker 89347 got signal 9
stress: WARN: [89346] (417) now reaping child worker processes
stress: FAIL: [89346] (451) failed run completed in 6s

How to create an SSH tunnel through a Squid HTTP proxy on RHEL/CentOS 6

Goal:

SSH to an external host outside an internal network by routing traffic through a Squid HTTP proxy, using a single line command.

Requirements:

It must be possible to generate the final command programmatically in the form ssh <user>@<external_host>, i.e. as if the internal host were connecting directly to the external host without passing through a proxy.

Prerequisites:

A Squid instance already configured and running using default settings on a RHEL/CentOS 6 host.

Method:

Identify the public DNS name / IP address of the external host.

If a private key is required to ssh onto the external host:

Locate its path on the internal host you wish to connect from

Verify the file has permissions of 400 (i.e. read-only for the owner, no access for anyone else); if not, run the following as root or as the file’s owner:

chmod 400 /path/to/key/pair.extension
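The mode can be checked non-interactively with GNU stat, as shipped on RHEL/CentOS (a sketch; the placeholder path is the one used above, so the check only prints when the file exists):

```shell
# Print the key's octal permission bits; expect 400
key=/path/to/key/pair.extension
if [ -e "$key" ]; then
    stat -c '%a' "$key"
fi
```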

Identify the user name used to connect to the external host.

On the external host, enable inbound SSH traffic from the internal host you are connecting from.

Configure the squid proxy to allow access to the external host if required:

ssh SQUIDPROXYHOST
sudo vi /etc/squid/SQUIDINSTANCE.conf

Add the following lines to the relevant sections of the config file:

acl <INTERNAL_IP_RANGE> src <XXX.XXX.XXX.XXX/XX>
acl <HOSTACL> dstdom_regex -i <External IP/domain name>
http_access allow <INTERNAL_IP_RANGE> <HOSTACL>
http_access allow <INTERNAL_IP_RANGE> CONNECT <HOSTACL>
:wq #save and quit
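As a concrete illustration of the template above (all names and addresses are hypothetical examples), an ACL pair allowing the 10.1.2.0/24 range to reach example.com, including CONNECT for the tunnelled SSH traffic – note that the CONNECT ACL is predefined in the default squid.conf:

```
acl dev_net src 10.1.2.0/24
acl ext_host dstdom_regex -i example\.com
http_access allow dev_net ext_host
http_access allow dev_net CONNECT ext_host
```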

Reconfigure the squid proxy:

/usr/sbin/squid -k reconfigure -f /etc/squid/SQUIDINSTANCE.conf

Connecting to the external host via ssh:

Modify the ssh client configuration file (ssh_config, not sshd_config) –

RHEL/CentOS 6:

For a single user:

cp /etc/ssh/ssh_config ~/.ssh/config  #if the user does not already have this file
vi ~/.ssh/config
Host <External IP/domain name>
ProxyCommand /usr/bin/nc -X connect -x extproxy:3128 %h 22
:wq

System wide:

vi /etc/ssh/ssh_config
Host <External IP/domain name>
ProxyCommand /usr/bin/nc -X connect -x extproxy:3128 %h 22
:wq

RHEL/CentOS 7:

For a single user:

cp /etc/ssh/ssh_config ~/.ssh/config  #if the user does not already have this file
vi ~/.ssh/config
Host <External IP/domain name>
ProxyCommand /usr/bin/nc --proxy extproxy:3128 %h 22 #nmap-ncat syntax, specific to RHEL/CentOS 7
:wq

System wide:

vi /etc/ssh/ssh_config
Host <External IP/domain name>
ProxyCommand /usr/bin/nc --proxy extproxy:3128 %h 22
:wq

Testing:

Run the ssh command to connect to the external host

ssh -i /path/to/key/pair.extension <user>@<External IP/domain name>
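Alternatively, the proxy hop can be supplied on the command line with -o ProxyCommand, avoiding any config-file edit (a sketch using the RHEL/CentOS 7 nmap-ncat syntax; extproxy:3128 is the example proxy from above):

```
ssh -i /path/to/key/pair.extension \
    -o 'ProxyCommand=/usr/bin/nc --proxy extproxy:3128 %h 22' \
    <user>@<External IP/domain name>
```

This form is handy for one-off connections, though it no longer matches the required ssh <user>@<external_host> format.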

How to run multiple squid proxy 3.X instances on the same RHEL/CentOS 6 host

Goal:

Have multiple squid proxy instances running on the same CentOS/RHEL 6 host with different configurations, and have them start automatically on boot.

Motivation:

Running multiple squid instances is useful to avoid unnecessarily complex configuration files for a single instance, or to segregate traffic, for example if you wish to route development and lab traffic via two different proxy instances.

Prerequisites:

A squid instance already configured and running using default settings on a CentOS/RHEL 6 host.

Method:

Here we run a second instance of squid on the same IP but a different port from the default (3128); in this how-to, XX designates the port number of the extra squid instance.

Modifying the squid.conf file

Copy the original squid.conf file and open the copy in a text editor:

ssh <SQUID_HOST>
cp /etc/squid/squid.conf /etc/squid/<SQUIDINSTANCENAME>.conf
chown root:squid /etc/squid/<SQUIDINSTANCENAME>.conf
vi /etc/squid/<SQUIDINSTANCENAME>.conf

Modify or add the following directives in the file:

http_port XX
visible_hostname SQUIDINSTANCENAME #Optional, useful if adding the proxy to DNS as a CNAME
pid_filename /var/run/SQUIDINSTANCENAME.pid
access_log /var/log/squid/SQUIDINSTANCENAME.access.log squid
cache_log /var/log/squid/SQUIDINSTANCENAME.cache.log
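The per-instance overrides above can be generated with a short shell snippet (a sketch; squid-lab and 3129 are example values for SQUIDINSTANCENAME and XX – merge the output into the copied squid.conf, replacing the original directives rather than duplicating them):

```shell
# Example values; substitute your own instance name and port
instance=squid-lab
port=3129

cat <<EOF
http_port $port
visible_hostname $instance
pid_filename /var/run/$instance.pid
access_log /var/log/squid/$instance.access.log squid
cache_log /var/log/squid/$instance.cache.log
EOF
```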

Modifying the sysconfig file

Copy the default sysconfig file and open it up in a text editor:

cp /etc/sysconfig/squid /etc/sysconfig/SQUIDINSTANCENAME
vi /etc/sysconfig/SQUIDINSTANCENAME

Modify the following line to point at the config file created above:

SQUID_CONF="/etc/squid/SQUIDINSTANCENAME.conf"

Modifying the startup script

Copy the original startup script and modify it, so the second instance can be started and stopped:

cp /etc/rc.d/init.d/squid /etc/rc.d/init.d/SQUIDINSTANCENAME
vi /etc/rc.d/init.d/SQUIDINSTANCENAME

Add the following lines to point at the sysconfig file created above:

if [ -f /etc/sysconfig/SQUIDINSTANCENAME ]; then
        . /etc/sysconfig/SQUIDINSTANCENAME
fi

Add the following line to point at the configuration file created above:

SQUID_CONF=${SQUID_CONF:-"/etc/squid/SQUIDINSTANCENAME.conf"}

Use chkconfig to add the script to the boot process and confirm it was successfully added:

chkconfig --add SQUIDINSTANCENAME
chkconfig --list #look for SQUIDINSTANCENAME in the list

Test that the new instance runs as expected:

service SQUIDINSTANCENAME start

If “Starting squid …… OK” appears, you have successfully modified the startup script and are now running a second squid instance on your host.

This procedure can be repeated to run as many squid proxies as desired on a host.
