Thursday 30 June 2011

DNS -- Configure a caching-only name server to forward DNS queries

Hot on the heels of my previous post comes this one. Assuming that you have followed the previous post, simply add the following lines to the options section of your /etc/named.conf file (change the IP address to whatever your DNS server is):
forwarders {10.168.20.233;};
forward only;
Restart the bind daemon and off you go.
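If you want to double-check that forwarding is actually happening, restart named and query the caching server directly with dig (the hostname below is just one from my lab):
service named restart
dig @127.0.0.1 myserver.domain.com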

Note that since we are actually forwarding name queries, there is no need to modify the /var/named/named.ca file, as I had to do in the previous post.

DNS -- Configure a caching-only name server

I must confess, yet again, that I'm not 100% sure what this objective refers to. My understanding is as follows: a caching server is, as its name indicates, used to cache queries, so an authoritative server is needed to provide the actual answer in the first place, which this server then caches. I think this is geared towards having a single DNS server within an organization, so that internet name queries are cached on this server.

My RHEL6 boxes don't have internet access, so this has been a little bit awkward for me to test. I essentially set up a master DNS server and then modified the /var/named/named.ca file on the caching name server, changing the IP address of one of the root servers to be my master DNS server, like this:

M.ROOT-SERVERS.NET.     3600000 IN      A       10.168.20.233
I think I might be getting a little bit ahead of myself. Let's start from the beginning and install Bind:
yum install bind -y
You'll now need to edit the bind configuration file /etc/named.conf and make a few changes:
listen-on port 53 { any; };
allow-query     { any; };
Since I had not configured DNSSEC properly, I also commented the dnssec lines out.
/*      dnssec-enable yes;
        dnssec-validation yes;
        dnssec-lookaside auto;
*/
Ensure that the Bind daemon is set to run at boot time:
chkconfig named on
Open up the firewall and save the changes:
iptables -I INPUT -p udp --dport 53 -j ACCEPT; iptables -I INPUT -p tcp --dport 53 -j ACCEPT;service iptables save
You can now start named:
service named start
The best way to test this is to use dig and look at the time it takes to run a query. In my case, I can just turn off the master DNS server and, if the results are cached, I will still get a response, e.g.:
dig myserver.domain.com
;; Query time: 2 msec
;; SERVER: 10.168.20.234#53(10.168.20.234)
dig myserver.domain.com
;; Query time: 0 msec
;; SERVER: 10.168.20.234#53(10.168.20.234)
This feels a little bit unsatisfying, so I used the tc command to add a 200 millisecond delay to all traffic on eth0 (note that this is done on the master DNS server):

tc qdisc add dev eth0 root netem delay 200ms
I bounced the caching server and tried again with dig:
dig myserver.domain.com
;; Query time: 202 msec
;; SERVER: 10.168.20.234#53(10.168.20.234)
dig myserver.domain.com
;; Query time: 0 msec
;; SERVER: 10.168.20.234#53(10.168.20.234)
A lot better this time :). It now makes a bit more sense to have a caching name server.

Note that the cache is stored in memory and therefore will disappear after a reboot of the server or of named itself, see here.

Also note that there are no SELinux settings related to this objective and that, in order to prevent hosts from accessing the service, you should use an iptables rule.
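For example, to stop a single host from querying this DNS server (10.168.20.99 is a made-up address), something like this should do the trick:
iptables -I INPUT -p udp --dport 53 -s 10.168.20.99 -j DROP
iptables -I INPUT -p tcp --dport 53 -s 10.168.20.99 -j DROP
service iptables save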

Rng-Utils and Entropy RHEL6 style

I gave configuring DNSSEC a go last night but I had a bit of a problem. When I ran this command to create the key for my domain zone:
dnssec-keygen -a RSASHA1 -b 1024 -n ZONE domain
For a good while, this was all the output:
Generating key pair.
It seemed to hang there. The problem turns out to be a lack of entropy, which can be checked with this:
cat /proc/sys/kernel/random/entropy_avail
73
It turns out that this is not good enough to generate a key, so the standard advice is to try compiling a kernel or generating some I/O work. Compiling a kernel was really not an option, so I tried to generate some I/O work, but to no avail. After googling for a bit I came across the rng daemon, which will generate a bit of entropy for you.
rngd -r /dev/urandom -o /dev/random -b
Now entropy in the system is:
cat /proc/sys/kernel/random/entropy_avail
3968
Which is enough to generate the key. Note that /dev/urandom is not truly random, as it will use SHA1 to generate random data when the entropy pool has been depleted, see this for a better explanation. However, this is good enough for my test system.

You need to install rng-tools in RHEL6 to use the rng daemon, note that it is no longer rng-utils.
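So, in short:
yum install rng-tools -y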

Saturday 25 June 2011

HTTP/HTTPS -- Configure group-managed content

I must confess that I'm not sure what this objective refers to. I initially thought this referred to group authentication, however when I tried to find out what other people were saying I came up empty. This blog does not cover it, neither does this one. It does not seem to be covered by this book. I was about to give up, when I found this blog, where the objective is simply to set up a directory that is configured for collaborative editing.
Note that there is an error, step four should use chown rather than chgrp.
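In case those links ever go stale, here is a minimal sketch of that sort of setup, with a made-up group (webteam) and directory: a group owns the directory, the setgid bit makes new files inherit the group, and group members get write access.
groupadd webteam
mkdir /var/www/html/shared
chown root:webteam /var/www/html/shared
chmod 2775 /var/www/html/shared
usermod -aG webteam myuser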

Friday 24 June 2011

HTTP/HTTPS -- Deploy a basic CGI application

This is actually a surprisingly easy objective to achieve. Create a script like the one below in the /var/www/cgi-bin directory and call it uptime.cgi:
#!/bin/bash
echo "Content-type: text/html"
echo ""
echo "Uptime is:  $(uptime)"
If you move/copy the script from a different directory or you use a different directory, the SELinux context is likely to be wrong and will need to be changed, so bear that in mind.

Make the script executable:
chmod +x uptime.cgi
You can now test your new cgi script with:
elinks 127.0.0.1/cgi-bin/uptime.cgi
You might want to add the following directives to a different <Directory> block to enable script execution there and to allow other script extensions (a sketch follows the note below).
Options +ExecCGI
AddHandler cgi-script pl cgi
Note that the . before the file extension is not needed and that the extensions are case insensitive.
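As a sketch, for a hypothetical directory /var/www/myscripts it would look something like this (remember the SELinux context caveat above if the scripts live outside /var/www/cgi-bin):
<Directory "/var/www/myscripts">
    Options +ExecCGI
    AddHandler cgi-script pl cgi
</Directory>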

HTTP/HTTPS -- Configure private directories

I'm not 100% sure whether this objective refers to making the home directory of system users available via Apache or simply to configuring a private area, whose access is controlled via user name. I will cover the former in this post and refer you to this post for the latter.

Again we'll be editing the httpd config file (/etc/httpd/conf/httpd.conf). Make sure that you have the following directives set:
UserDir public_html
#UserDir disabled
And then simply uncomment the example provided, which will give you read access to the user files:
<Directory /home/*/public_html>
    AllowOverride FileInfo AuthConfig Limit
    Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
    <Limit GET POST OPTIONS>
        Order allow,deny
        Allow from all
    </Limit>
    <LimitExcept GET POST OPTIONS>
        Order deny,allow
        Deny from all
    </LimitExcept>
</Directory>
You'll now need to create a public_html directory for all users and make sure that permissions and SELinux are configured correctly. This is for a user called myuser.
mkdir /home/myuser/public_html;chmod 701 /home/myuser; chmod 705 /home/myuser/public_html
Now create a test page and give it the right permissions (the relative path assumes you are in /home/myuser):
echo 'A Simple User Page' >> public_html/index.html; chmod 604 public_html/index.html
Finally, set the SELinux settings to enable home directories:
setsebool -P httpd_enable_homedirs 1
and change the file contexts to the Apache content type (you can get this command from the manual page for httpd_selinux):
 chcon -R -t httpd_sys_content_t /home/myuser/public_html
Restart Apache and you should be able to visit myuser's fancy page:
elinks 127.0.0.1/~myuser
You can create the public_html directory and even a simple page for all new users by modifying the skeleton directory, like so:
mkdir /etc/skel/public_html
echo 'A Simple User Page' >> /etc/skel/public_html/index.html;
chmod -R 705 /etc/skel/public_html/
This only helps for new users, but for existing users the process could be scripted like this:
#!/bin/bash
if [ -n "$1" ]
then
  user="$1"
else
  echo "Usage: prepare username"
  exit 1
fi

##Set appropriate permissions for home directory
chmod 701 "/home/$user"

##Create public_html
mkdir "/home/$user/public_html"

##Create index.html file
echo "A Simple User Page for $user" >> "/home/$user/public_html/index.html"

##Change permissions and ownership
chown -R "$user":"$user" "/home/$user/public_html"
chmod -R 705 "/home/$user/public_html/"

##Change SELinux context
chcon -R -t httpd_sys_content_t "/home/$user/public_html"
This script could be improved by looping through the accounts and checking that public_html does not already exist, but it does the job. A rough sketch of that loop is below.
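Something along these lines, for instance (untested, and it blindly assumes that every directory under /home belongs to a user of the same name):
#!/bin/bash
## Create public_html for every existing user that does not have one yet
for home in /home/*; do
  user=$(basename "$home")
  [ -d "$home/public_html" ] && continue
  chmod 701 "$home"
  mkdir "$home/public_html"
  echo "A Simple User Page for $user" > "$home/public_html/index.html"
  chown -R "$user":"$user" "$home/public_html"
  chmod -R 705 "$home/public_html"
  chcon -R -t httpd_sys_content_t "$home/public_html"
done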

Thursday 23 June 2011

HTTP/HTTPS -- Configure a virtual host

If you are coming from a Windows background virtual hosts are the equivalent of hosting several websites using host headers.
In my case I have created a couple of CNAME aliases on my DNS server for 10.168.20.225, so that rhel6virtual.dev.com and rhel6morevirtual.dev.com both point to 10.168.20.225, the IP address of the Apache server. You can replicate this by modifying your /etc/hosts file if you don't want to use a DNS server. Note that the entries need to be added to the client too.

I can now edit the Apache config file (/etc/httpd/conf/httpd.conf) like this:
NameVirtualHost *:80

<VirtualHost *:80>
    ServerAdmin webmaster@dummy-host.example.com
    DocumentRoot /var/www/rhel6virtual/
    ServerName rhel6virtual.dev.com
    ErrorLog logs/rhel6virtual
    CustomLog logs/rhel6virtual common
</VirtualHost>
<VirtualHost *:80>
    ServerAdmin webmaster@dummy-host.example.com
    DocumentRoot /var/www/rhel6morevirtual
    ServerName rhel6morevirtual.dev.com
    ErrorLog logs/rhel6mv
    CustomLog logs/rhel6mv common
</VirtualHost>
I now create the DocumentRoot directories:
mkdir /var/www/rhel6virtual; mkdir /var/www/rhel6morevirtual
and add a file to each directory to allow easy testing:
  echo "More Virtual" > /var/www/rhel6morevirtual/index.html; 
  echo "Virtual" > /var/www/rhel6virtual/index.html
You can now restart Apache:
httpd -k restart
So now if you visit http://rhel6virtual.dev.com/index.html you'll see a web page that simply says Virtual and if you visit http://rhel6morevirtual.dev.com/index.html you'll see a web page that simply says More Virtual.

Installing Apache has already been covered here. You can check the rather long list of SELinux settings with:
getsebool -a | grep httpd
For an explanation of what each setting does, check this manual page out:
man httpd_selinux
In order to prevent access to the websites you can use iptables (don't forget to save the configuration), e.g.
 iptables -I INPUT -p tcp --dport 80 -s 10.168.20.0/24 -j DROP
or you can edit the Apache configuration file and add the following to the second virtual host from above:
<Directory "/var/www/rhel6morevirtual/">
    Options Indexes FollowSymLinks
    AllowOverride None
    Order deny,allow
    Allow from 10.168.20.203
    Deny from all
</Directory>
Only 10.168.20.203 can see rhel6morevirtual now.

In order to restrict which users can access the web server, you first need to create users that are allowed to use it, so add a user and password with the following command (the -c creates the file, so it's only needed the first time):
 htpasswd -cm /etc/httpd/conf/apachepass myuser
Now, edit the Apache config file and inside the directory directive for "/var/www/rhel6morevirtual/" add:
AuthType Basic
AuthName "Restricted Files"
AuthUserFile /etc/httpd/conf/apachepass
Require user myuser
Restart Apache and now the only user that can see rhel6morevirtual will be myuser.

Note that an alternative to this method is to use an .htaccess file. In this method we create an .htaccess file in the target directory, /home/myuser/public_html/ in my case.

Edit the .htaccess file and enter the following:
AuthType Basic
AuthName "Restricted to myuser"
AuthUserFile /home/myuser/public_html/.htauthusers
Require valid-user
You now need to run the following from that directory:
htpasswd -c .htauthusers myuser
If you try to visit the page, you'll be prompted for a username and password. The beauty of this method is that it allows users without root access to restrict access to "their" web site.

Wednesday 22 June 2011

Configure a system to accept logging from a remote system

As I said in my previous post this should be combined with this objective, Configure a system to log to a remote system. At any rate, in order to configure a system to accept logging from a remote system, you need to edit the /etc/rsyslog.conf file.

Remove the comments from these lines to activate TCP remote logging.
#$ModLoad imtcp.so
#$InputTCPServerRun 514
so that they look like this
$ModLoad imtcp.so
$InputTCPServerRun 514
Open the firewall and save the configuration change to the firewall:
iptables -I INPUT -p tcp --dport 514 -j ACCEPT; service iptables save
All that remains is to restart the logging daemon:
service rsyslog restart
Note that you could use UDP instead of, or as well as, TCP. The rsyslog manual is your friend.
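From memory, the UDP equivalent in /etc/rsyslog.conf is the imudp module (double-check the commented lines in your own file, as the directive names differ slightly from the TCP ones):
$ModLoad imudp.so
$UDPServerRun 514
You would then open udp port 514 in the firewall instead, and the clients would use a single @ rather than @@ in front of the server address.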

Configure a system to log to a remote system

I sometimes wonder about Red Hat. This objective is meaningless without the next objective, Configure a system to accept logging from a remote system, so why not just combine them? I guess that if they did that, it would cut down on the number of objectives and make the exam look too easy, who knows?

At any rate, provided that you have achieved the next objective [sic] this one should be very easy to achieve. You need to edit the /etc/rsyslog.conf file and add the following line at the end of it:
*.* @@10.168.20.233:514
The above assumes that you want to log everything to a server at 10.168.20.233 on port 514. It is of course possible to log just a single category, e.g. mail, cron, authentication, etc. Use the next line to log authentication messages to the same server:
authpriv.*                                            @@10.168.20.233:514
All that remains is to restart the logging daemon:
service rsyslog restart
You can test that the system is logging to the remote server with:
logger "remote logger"
Incidentally, the logger command is very good for adding logging to bash scripting, so be sure to remember it.
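For example, a script can tag its messages and pick a facility/priority, which makes them easy to filter in rsyslog (the tag and message below are made up):
logger -t mybackup -p local0.info "backup finished"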

Use shell scripting to automate system maintenance tasks

This is by far the most open objective in either of the two exams. Frankly, I think this objective should be better defined. Yes, RHCEs should be expected to know a significant amount of shell scripting, but by leaving this objective defined in such an open fashion it risks people spending either too much time on it, probably not a bad thing in itself, or none at all.

At any rate, if you have never done any shell scripting, I recommend this tutorial, continued here. If you are familiar with the basics, or even if you are not, you can follow this guide. I've also been looking at this book. I neither endorse nor disparage it, it just came up on a search of books24x7.com.

Saturday 18 June 2011

Produce and deliver reports on system utilization (processor, memory, disk, and network)

This is a relatively easy objective. You need to use the sar command, which has a large number of flags that can be combined. If you want to get everything, hopefully you won't have to analyse it :), just use:
sar -A
In order to get processor utilization statistics simply use (note that cpunumber is zero based, in other words the first cpu is 0):
sar
or
sar -P cpunumber
or
sar -u ALL
In order to get memory usage statistics, use:
sar -rR
To get swap usage statistics, use:
sar -S
Arguably paging usage statistics are part of the memory, so:
sar -B
You can get disk usage statistics with:
sar -b
or
sar -d
or even
sar -dp
Finally, network usage statistics can be obtained with (note that there are over 15 different keywords, e.g. ICMP, EICMP, TCP, ETCP, etc., and you can normally add an E to the keyword to get the error statistics):
sar -n DEV
and
sar -n EDEV
Unfortunately, the manual page is very long and unwieldy, so it might be a good idea to remember some of these by heart for the exam.
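A couple of flags worth remembering for producing reports: sar can read back the daily data files kept under /var/log/sa and can be restricted to a time window, e.g. (the day and the times below are just examples):
sar -u -f /var/log/sa/sa15
sar -u -s 09:00:00 -e 10:00:00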

Configure a system as an iSCSI initiator that persistently mounts an iSCSI target

The first time I tried to do this I used FreeNAS, if anything because my girlfriend thinks that Beastie is cute, but there is no need to use it in order to set up an iSCSI target: Red Hat can do it as well. Thus, even though it is not part of the objectives, the first part of this post will be to set up an iSCSI target. Once the target has been set up, I'll show you how to mount it persistently, which is the objective.

Setting up an iSCSI target, Red Hat style:

First you need to install the necessary packages:
yum install scsi-target-utils -y
Start the target daemon and set it to start on boot:
service tgtd start; chkconfig tgtd on
You can now add a target, using the tgtadm command. Oddly enough, its help output is actually extremely helpful, in that it essentially gives you examples rather than just a list of what each flag does. Not that there is anything wrong with such a list, but sometimes more examples would not go amiss. Anyway, create the target with:
tgtadm --lld iscsi --mode target --op new --tid=1 --targetname iqn.3141.15.domain.com:test
You then need to add a LUN to this target and tell it what storage it should use:
 tgtadm --lld iscsi --mode logicalunit --op new --tid=1 --lun=1 --backing-store=/dev/sdb
All that remains is to enable the target to accept initiators:
tgtadm --lld iscsi --mode target --op bind --tid=1 --initiator-address=ALL
You can check port 3260 with (thanks to Daniel Miessler for this post):
lsof -i:3260
If you don't see any output then something has gone wrong. You can list the targets with this command: 
tgtadm --lld iscsi --op show --mode target
Make sure that you open port 3260 on your firewall and save it:
iptables -I INPUT -p tcp --dport 3260 -j ACCEPT; service iptables save
That should be it, you now have an iSCSI target ready to be mounted. Note that this will allow anybody to mount your target, so it is really only good as an internal test.
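If I remember correctly, tgtadm will also take a network here, so something like this should at least limit it to the local subnet (untested on my boxes):
tgtadm --lld iscsi --mode target --op bind --tid=1 --initiator-address=10.168.20.0/24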

Configure a system as an iSCSI initiator that persistently mounts an iSCSI target

This is the actual objective. You need to install the relevant packages first:
yum groupinstall 'iSCSI Storage Client'
Strangely, this group seems to consist solely of iscsi-initiator-utils. Anyway, start the iscsi service and set it to start on boot:
 /etc/init.d/iscsi start; chkconfig iscsi on
You can now look for targets with this, where 10.168.20.233 is the ip address of the server hosting the iSCSI targets:
iscsiadm -m discovery -t sendtargets -p 10.168.20.233
Restart the iscsi service:
service iscsi restart
Check that you see the new device with:
fdisk -l
You can now use the new disk to create partition(s) and file systems for those partition(s) and mount it (them).

A note of caution: as this is a network device, you need to make sure that you specify the _netdev option in fstab, otherwise your system will refuse to boot up. Something like this is needed:
UUID=54e1bd41-68d0-4804-94f8-1b255e53a88d /iscsi ext4 _netdev 0 0
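For completeness, the steps would look roughly like this (/dev/sdc is a made-up device name; use whatever fdisk -l reports for the iSCSI LUN):
fdisk /dev/sdc                  # create a single partition, /dev/sdc1
mkfs -t ext4 /dev/sdc1          # put a file system on it
blkid /dev/sdc1                 # grab the UUID for the fstab entry above
mkdir /iscsi
mount -a                        # check that the fstab entry mounts cleanly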

Friday 17 June 2011

Build a simple RPM that packages a single file

I find this objective bizarre, although perhaps I should be used to the fact by now that some objectives just don't make sense. In this case, I just think that this is something that is unlikely to be done by a system admin. To be fair, it is much cooler to give somebody an rpm with your scripts, but still.

The first thing is to install the rpmdevtools package:
yum install rpmdevtools -y
You now need to create the source tree, which you can do manually (you'll need BUILD, BUILDROOT, RPMS, SOURCES, SPECS and SRPMS directories) or you can just use:
rpmdev-setuptree
Since the objective calls for at least one file to be packaged, let's just create a script, say greetings.sh, and copy it to its own folder in the /home/user/rpmbuild/SOURCES directory:
echo 'echo "hello `whoami`";' >> greetings.sh; chmod +x greetings.sh ; mkdir /home/user/rpmbuild/SOURCES/greet-1.0; cp greetings.sh /home/user/rpmbuild/SOURCES/greet-1.0
Create a tarball out of the directory containing the greetings.sh script, from the SOURCES directory:
tar -czvf greet.tar.gz greet-1.0
Now you need a spec file; you can create a sample spec file with (again from the SOURCES directory):
rpmdev-newspec ../SPECS/greet.spec
You can now edit this file so that it looks like this:
Name:           greet
Version:       1.0
Release:        1%{?dist}
Summary:        Greets the invoker

Group:         Greetings Group
License:        GPL
URL:          http://www.sgniteerg.com
Source0:        greet.tar.gz
BuildRoot:      %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)


%description

%prep
%setup -q
%build
%install
install -m 0755 -d $RPM_BUILD_ROOT/opt/greet
install -m 0777 greetings.sh $RPM_BUILD_ROOT/opt/greet/greetings.sh

%clean
#rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root,-)
%dir /opt/greet
/opt/greet/greetings.sh
%doc

%changelog
You can build this spec from the SPECS directory with:
rpmbuild -bb greet.spec
This will create an rpm, greet-1.0-1.el6.x86_64.rpm, in the RPMS/x86_64 directory of your build tree, which you can then proceed to install. If you are not using the x86_64 architecture, the directory will be different; since the package contains no compiled code, you could also add BuildArch: noarch to the spec file.
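To check the end result, install the rpm and list its contents (the path assumes the /home/user build tree used above):
rpm -ivh /home/user/rpmbuild/RPMS/x86_64/greet-1.0-1.el6.x86_64.rpm
rpm -ql greet
/opt/greet/greetings.sh    # should print hello <your user>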

You can now package all your scripts and pass them to your friends as an rpm file. How cool is that?

Configure system to authenticate using Kerberos

This objective is not very well defined, or at least I don't understand what Red Hat is aiming at here. There is no mention of using Kerberos for anything once the system is configured to authenticate with it, or of what directory service should be used for accounts, or indeed whether a directory server should be used at all. The other issue that I see is that you need user principals to be able to do anything; you might be supplied with these in the exam, who knows?

Anyway, assuming that you have a working KDC server (see my post on openSSH with Kerberos for details of configuring a KDC), you can use authconfig-tui to configure Kerberos on your client.


The /etc/krb5.conf file should now have been modified to something like this:
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = DOMAIN.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true

[realms]
 DOMAIN.COM = {
  kdc = yetanother.domain.com
  admin_server = yetanother.domain.com
 }

[domain_realm]
 .domain.com = DOMAIN.COM
 domain.com = DOMAIN.COM
Unfortunately, this will not actually do much.
kinit
kinit: Client not found in Kerberos database while getting initial credentials
As mentioned above you need a user principal in order to get a ticket and a user to be able to do anything useful. Let's say that you have openLDAP configured (have a look at this post if in doubt) and you have a user account called crap in that domain. Assuming that a principal for crap exists and you know the password you can just do:
kinit crap
and provided that you typed the right password, you'll get a ticket, check with:
klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: crap@DOMAIN.COM

Valid starting     Expires            Service principal
06/17/11 11:59:53  06/18/11 11:59:53  krbtgt/DOMAIN.COM@DOMAIN.COM
        renew until 06/17/11 11:59:53
Have a look at my previous post to configure openSSH to work with Kerberos.

Use Kerberos to authenticate OpenSSH - RHEL6

In order to achieve this you'll need two machines. They can be VMs or actual physical boxes. Ideally, you want to set up Kerberos in conjunction with an LDAP directory (Windows Active Directory will do just that, and I plan on investigating how to get single sign-on working with a Windows AD domain), but in actual fact you don't need an LDAP directory, you can just as easily use local users. But I'm getting ahead of myself.

In order to make my life easier, I have created a new zone on my DNS server, called domain.com, and I have added both the KDC server and the client to this zone. I then edited the /etc/resolv.conf file to point to my DNS server. This is actually not needed and you can use the /etc/hosts file instead; just make sure that both the KDC server and the client have entries for each other. Also make sure that the entries are of the form:
ipaddress fqdn hostname
In my case the kdc server is called yetanother.domain.com and the client another.domain.com, so bear that in mind, when running through the instructions.

Logged in as root on yetanother.domain.com, run:
  1. yum install krb5-server -y  -- To install the KDC
  2. kdb5_util create -s -- To create the KDC database.
  3. edit /var/kerberos/krb5kdc/kadm5.acl and change the Realm to DOMAIN.COM -- To enable administration of the database.
  4. edit /etc/krb5.conf and change references to example.com to domain.com. Note that you should respect capitalization, e.g. EXAMPLE.COM should be changed to DOMAIN.COM and example.com should be changed to domain.com -- This is the client configuration.
  5. kadmin.local -q "addprinc root/admin" -- add an administrator to kdc.
  6. service krb5kdc start -- self explanatory.
  7. service kadmin start -- self explanatory.
  8. kadmin.local -q "addprinc -randkey host/yetanother.domain.com" -- add kdc principal to kdc.
  9. kadmin.local -q "addprinc -randkey host/another.domain.com" -- add client principal to kdc.
  10. kadmin.local -q "ktadd -k /etc/krb5.keytab host/yetanother.domain.com" -- add the kdc principal to the keytab file.
  11. Add a principal that corresponds to a user account. kadmin.local -q "addprinc crap" -- add a user.
  12. Ensure that the openSSH daemon will accept GSSAPI as an authentication method by making sure that the following lines in /etc/ssh/sshd_config are not commented out. -- Configure the openSSH daemon.
    GSSAPIAuthentication yes
    GSSAPICleanupCredentials yes
  13. service sshd restart --Restart the openSSH daemon.
If you are not using an LDAP directory for user accounts, make sure that the user crap exists in both server and client.

Logged on as root in another.domain.com, ensure that the date and time are the same as on yetanother.domain.com and run:
  1. kadmin -q "ktadd -k /etc/krb5.keytab host/another.domain.com" -- authenticate as root/admin when prompted; this adds the client principal to the client's keytab file.
  2. Add the following lines to the ssh client config file (/etc/ssh/ssh_config).
    Host *.domain.com
    GSSAPIAuthentication yes
    GSSAPIDelegateCredentials yes
  3. Ensure that the /etc/krb5.conf file is identical to the one in yetanother.domain.com
  4. su crap
  5. kinit -- Get a kerberos ticket.
  6. ssh yetanother
This will log you in to yetanother.domain.com as the user crap.

In general, it makes sense to use an LDAP directory together with Kerberos authentication, as otherwise you will need to have a user account on each server. In this case I would have needed to add a user account called crap to another.domain.com. This is not a very onerous task, but if you have to do it for many servers it gets boring quickly.

To add more servers, simply create a principal for the server (kadmin -q "addprinc -randkey host/servername.domain.com"), then add that principal to the krb5.keytab of the server by running kadmin -q "ktadd -k /etc/krb5.keytab host/servername.domain.com" on servername.domain.com itself. Make sure that the Kerberos configuration file /etc/krb5.conf is correct and finally make sure that the openSSH client and daemon are configured correctly, see above.
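In short, for a new box (servername.domain.com is a placeholder, and the second command is run on that box):
kadmin -q "addprinc -randkey host/servername.domain.com"
kadmin -q "ktadd -k /etc/krb5.keytab host/servername.domain.com"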

Wednesday 15 June 2011

Use /proc/sys and sysctl to modify and set kernel run-time parameters

This is a fairly simple objective, discussed in the previous post.
The kernel run-time parameters can be listed with:
sysctl -a
In order to change a setting temporarily:
sysctl net.ipv4.ip_forward=1
or
echo 1 > /proc/sys/net/ipv4/ip_forward
In order to make the changes permanent you need to edit the /etc/sysctl.conf file. Simply add the value you want, e.g. net.ipv4.ip_forward=1, save it and then issue the following command:
sysctl -p

Use iptables to implement packet filtering and configure network address translation (NAT)

I'll start with the second part of this objective as it is the more concretely defined.
In my case, I will be using two VLANs instead of two actual interfaces, for reasons that would take way, way too long to explain.
I have a bunch of servers on network 10.168.20.0 and they want to communicate with servers on network 10.10.11.0. In this configuration, 10.168.20.0 could be thought of as my local network and 10.10.11.0 as the internet.

The gateway server has eth1.11 with ip address 10.10.11.16 and eth1.10 with ip address 10.168.20.227. If you are wondering what the .11 and .10 mean, well, they are tagged (VLan) traffic, have a look here for some details. In a more standard configuration you would probably use eth0 for local and eth1 for internet, so change commands below accordingly.

On the gateway server, we need to modify the iptables rules as follows:
  1. iptables -t nat -I POSTROUTING -o eth1.11 -j MASQUERADE
  2. iptables -I FORWARD -i eth1.10 -o eth1.11 -j ACCEPT -m comment --comment "accept everything on the way out"
  3. iptables -I FORWARD -o eth1.10 -i eth1.11 -m state --state RELATED,ESTABLISHED -j ACCEPT -m comment --comment "accept related or established on the way back"
  4. service iptables save
The first rule modifies the packets so that they are returned to the original server.
The second rule will forward any traffic coming from eth1.10, i.e. the local network, to eth1.11, i.e. the "internet". You don't need the comments, obviously.
Finally, the third rule will forward the packets on their way back from the internet to the local network. Note that no new connections will be forwarded, which prevents forwarding connections that were not initiated from a server on the local network.

You now need to allow the gateway to forward IP packets and this can be done by modifying the /etc/sysctl.conf file. Look for the line net.ipv4.ip_forward = 0 and change it to net.ipv4.ip_forward = 1.

Issue the following command to reload the sysctl.conf file:
sysctl -p
You can check that the changes have taken place with:
sysctl net.ipv4.ip_forward
Your server is ready, you just need to make sure that the default gateway is set to this server in the clients, see my previous post for details on how to do this.

NAT done and dusted, let's have a look at packet filtering. This is such an open ended objective that it is hard to see what is being asked of the candidate. I have touched on iptables in a previous post, so I'll be brief here.
Say you want to prevent an IP address from accessing your server, in case it is trying a rudimentary DoS attack:
iptables -I INPUT -p tcp --dport 80 -s 10.168.20.225 -j REJECT
You can block a whole network, just change the -s parameter to say, 10.168.20.0/24. You can use a similar rule to allow access from particular ip addresses or networks (make sure that there are no spaces between the ip addresses or networks) :
 iptables -I INPUT -p tcp --dport 80 -s 10.168.20.225,10.168.20.226 -j ACCEPT
Similarly, you could create a single rule for several services (say http, https):
 iptables -I INPUT -p tcp -m multiport --dports 80,443  -j ACCEPT
As you can imagine, this barely touches the surface of what iptables can do, but it gives you an idea.

Route IP traffic and create static routes

When I first read this objective I immediately thought of the routing table. It turns out that in Linux land there is no -p flag to make routes persistent; instead they need to be written to /etc/sysconfig/network-scripts/route-interface, where interface is the name of the interface, e.g. eth0.

There are two main ways of setting a route with this method, assuming you want the routes set for eth0.
1. echo "10.10.11.0/24 via 10.168.20.227 dev eth0" >> /etc/sysconfig/network-scripts/route-eth0
2. echo "10.10.11.0/24 dev eth0" >> /etc/sysconfig/network-scripts/route-eth0
You can activate the routes with the following command:
/etc/sysconfig/network-scripts/ifup-routes eth0
The first way will provide a route to the 10.10.11.0 network and set 10.168.20.227 as the gateway for that route; in other words, it expects 10.168.20.227 to be able to route those packets to the 10.10.11.0 network (or at least to forward them to a server/router that can). You can check the routing table in a myriad of ways, for instance (only showing the relevant line):
netstat -nr
Kernel IP routing table
Destination     Gateway         Genmask         Flags MSS Window irtt Iface
10.10.11.0      10.168.20.227   255.255.255.0   UG      0 0          0 eth0
The second way will provide a similar route to 10.10.11.0, but will not set a gateway for that route, so instead of sending the packets to a gateway it will simply send them directly to the 10.10.11.0 network.
route -n 
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.10.11.0      0.0.0.0         255.255.255.0   U     0      0        0 eth0
For completeness, the commands needed to achieve the same as above are the following:
route add -net 10.10.11.0 netmask 255.255.255.0 gw 10.168.20.227 eth0 

route add -net 10.10.11.0 netmask 255.255.255.0 eth0
Note that a reboot will clear these from the routing table, so you should use them only for testing before writing them to the interface route file.

There is a different way of routing with iptables, you can have a look at this post, however I don't think this is what Red Hat had in mind with this objective.

Monday 13 June 2011

Diagnose and address routine SELinux policy violations

You have three main tools for diagnosing SELinux policy violations:
  1. audit log (/var/log/audit/audit.log)
  2. ls -Z
  3. ps -AZ
I think that if you have realized that the issue lies with SELinux, that is half the battle, and the tools above can help you with that.

In order to address the policy violations that you might encounter, you will need the audit2why and audit2allow commands, so you'll need to install policycoreutils-python:
yum install policycoreutils-python
To illustrate how to use this, set SELinux to enforcing:
setenforce 1
Save your iptables configuration to a file:
iptables-save > mytables.txt
This file is empty, so check the audit log and you'll see the following message:
type=AVC msg=audit(1307819809.595:16342): avc:  denied  { write } for  pid=22969 comm="iptables-save" path="/root/mytables.txt" dev=sda3 ino=144189 scontext=unconfined_u:unconfined_r:iptables_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:admin_home_t:s0 tclass=file
Copy this line to a file, say iptables.audit and run:
audit2why < iptables.audit
You'll get this output:
 Was caused by:
                Missing type enforcement (TE) allow rule.

                You can use audit2allow to generate a loadable module to allow this access.
This confirms that the issue is with SELinux, so now let's resolve it:
audit2allow -M iptables -i iptables.audit
This will create a module called iptables.pp, that can be installed with this command:
semodule -i iptables.pp
Now you can safely save your iptables configuration.
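If you want to double-check that the module actually got loaded:
semodule -l | grep iptables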

As mentioned in a previous post, you should actually set SELinux to permissive in dev/testing as you might have more than one SELinux policy violation and then you'll end up creating loads of modules unnecessarily.

Use boolean settings to modify system SELinux settings

In order to list the SELinux settings you can use this command:
getsebool -a
Since SELinux settings don't really have catchy names, your best bet is using grep in conjunction with the -a switch, e.g. to find all SELinux settings related to ssh:
getsebool -a | grep ssh
You can now use the setsebool command to change the settings like this:
setsebool -P selinuxsetting boolean
where boolean is 1 to switch on and 0 to switch off.

Alternatively, you could use togglesebool, which flips the value, e.g.:
[root@centos1 examples]# getsebool -a | grep virt_use_nfs
virt_use_nfs --> off
[root@centos1 examples]# togglesebool virt_use_nfs
virt_use_nfs: active
[root@centos1 examples]# getsebool -a | grep virt_use_nfs
virt_use_nfs --> on

Restore default file contexts

Another easy objective, yay!!!

To restore default file contexts use:
restorecon -vv filename
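For a whole directory tree, the -R switch does it recursively, e.g. (the directory is just an example):
restorecon -Rvv /var/www/html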

List and identify SELinux file and process context

You'll need to use the -Z switch for this objective.
Thus in order to list SELinux files' context I normally use:
ls -lZ
and to list processes' context:
ps -AZ

Set enforcing and permissive modes for SELinux

You can check the current SELinux status with:
getenforce
You can also look at /etc/selinux/config, which will tell you the status at boot time. This does not necessarily mean that it is the current SELinux status, because you can switch it off on the fly by issuing the following command:
echo 0 >/selinux/enforce
or this command:
setenforce 0
Similarly, you can switch it back on with:
echo 1 >/selinux/enforce
or this command:
setenforce 1
Let's get back on track and look at the objective. You'll need to set the appropriate value for the SELINUX line in the /etc/selinux/config file. So for enforcing mode, you'll have:
SELINUX=enforcing
and for permissive you'll have:
SELINUX=permissive
In development/test, permissive mode should be used, so that you can diagnose and fix failures; in production you should use enforcing.

Sunday 12 June 2011

Configure firewall settings using system-config-firewall or iptables

Since advanced iptables settings (routing, NATing) are covered in the RHCE exam, I assume that this objective relates to allowing services through the firewall.
If you have been following this blog, and who hasn't?, then you'll already be somewhat familiar with the iptables command, but I'll expand here a little bit on some of the commands. First, however, let's have a look at system-config-firewall.
On the main screen, once you have allowed the services you want through the firewall, click Apply.
Note that this will essentially overwrite the current iptables configuration. If you are only using system-config-firewall then this is of no concern to you, so go ahead and press yes.

As with most GUI tools, it is fairly simple to use and there is not much to be said here, so let's turn our attention to iptables.

The iptables command is very powerful and can do a lot of things, and thus it can be fairly complex, but my reading of this objective is that only the basics are needed, so let's get started:
iptables -F
This will clear your iptables configuration, which will allow any traffic through. You can check that the firewall rules are empty with this command:
iptables -nvL
Now, let's block all traffic:
iptables -I INPUT -j DROP
Needless to say, you should not run this command remotely, as it will block your remote connection. You can use REJECT instead of DROP, where the former replies to the client and the latter doesn't; check the iptables manual for a longer and better explanation.

Let's allow ssh connections:
iptables -I INPUT -p tcp --dport ssh  -j ACCEPT
Note that if you use -I, iptables will insert the rule at the top of the chain; if you want to add it to the bottom of the chain you can use -A instead.

Note that the above will only allow ssh traffic for connections made to this server, not for connections from this server to another server. The reason for this is best explained with the output of the netstat -ant command:
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
tcp        0      0 10.168.20.221:22        10.168.20.227:34492     ESTABLISHED
tcp        0      0 10.168.20.221:44334     10.168.20.225:22        ESTABLISHED
You can see that the local server (10.168.20.221) is listening on port 22 for any address, and that a connection has been established to the local server on port 22 from 10.168.20.227, which is using port 34492. The line below shows the opposite: a connection has been established from port 44334 on the local server to 10.168.20.225 on port 22. This is essentially how network sockets work: the service listens on a pre-established port, 22 in this case, while the client end of each connection uses one of the ephemeral ports, and each connection is identified by the combination of both addresses and ports. The upshot is that the return traffic for connections you initiate from this server arrives on ephemeral ports, which the rule above does not allow. So what can you do? Just add a rule like this:
iptables -I INPUT -p tcp -m state --state  RELATED,ESTABLISHED -j ACCEPT
Note that in order for a connection to be established it needs to be initiated from the client, and thus this should not present any risk if your server has not been compromised.

A better rule would use the source port and input interface flags, --sport and -i respectively, so that only SSH return traffic is allowed, like this:
iptables -I INPUT -i eth0 -p tcp --sport 22 -m state --state  RELATED,ESTABLISHED -j ACCEPT
Remember that you need to save the rules, as otherwise they will be lost after a reboot. A way of saving them not discussed before:
service iptables save 
I have already provided examples of rules for web and ftp servers in previous posts. A couple more commands to finish: the first one is how to delete rules:
iptables -D chainname rulenumber
and the second one is how to zero the counters, which can sometimes be helpful when troubleshooting:
iptables -Z chainname
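To find the rule numbers in the first place, list the chain with line numbers:
iptables -nvL INPUT --line-numbers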

Configure a system to use an existing LDAP directory service for user and group information

In theory this should not be an overly complicated objective, in practice it all depends on your existing LDAP service.

I'm using a Windows 2003 box as my primary domain controller and it took me the best part of 2 days to work it all out. As this is unlikely to be what Red Hat had in mind when setting this objective, I also installed openLDAP, and what a bundle of joy that turned out to be. I'll post how I did it at some other point.
Assuming, as the objective states, that there is a working LDAP service, we can use a TUI tool (authconfig-tui) to configure this. There is of course a GUI tool (system-config-authentication) too; feel free to use it if you like it better, or even the full-on command authconfig.

Let's get started by installing the necessary packages:
yum install openldap{,-clients,-devel}
Now run the authconfig-tui tool:



When you exit this tool, the System Security Services Daemon (sssd) and the local LDAP name service daemon (nslcd) should start. I say should because for some reason nslcd refused to start. Similarly, the /etc/nsswitch.conf file sometimes has the wrong configuration:
passwd:     files sss
shadow:     files sss
group:      files sss
You need to change the above three lines to:
passwd:     files ldap
shadow:     files ldap
group:      files ldap
You can now (re)start nslcd. Needless to say, you should make sure that the services (sssd, nslcd) will run after a reboot (e.g. chkconfig nslcd on).

If you get the list of system users, you should now be able to see domain users:
getent passwd
You can check the domain users with this command to compare them with the output of the command above:
ldapsearch -xb "dc=domain,dc=com" "objectclass=account"
I must say that this objective seems a little bit more complicated than the average objective. I might give authconfig a try to see if it is less fiddly.

Saturday 11 June 2011

Create, delete and modify local groups and group memberships

Another reasonably simple objective:

Create groups:

To create a group simply run the following command:
groupadd groupname
Delete group:

To delete a group simply run the following command:
groupdel groupname
Modify groups and group memberships:

The first objective can be achieved with the groupmod command, whereas the second can be achieved with usermod as explained in the previous post, or with the groupmems command.

Say you want to change the name of a group:
groupmod -n newgroupname oldgroupname
or change the group id :
groupmod -g 1234 groupname
Group memberships can be changed with the groupmems command. To add users to a group issue the following command:
groupmems -g groupname -a username
and to delete them from the group:
groupmems -g groupname -d username
You can purge a group with the -p switch and the -l switch will list the users belonging to that group, e.g.:
groupmems -g groupname -l
The group file is in /etc/group, but you could also list its contents with the getent command:
getent group
This command also works for users (passwd or shadow), hosts and a few other databases, as detailed in the man page.
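For example, to look up a single entry (names taken from earlier posts):
getent passwd myuser
getent hosts yetanother.domain.com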

Change passwords and adjust password aging for local user accounts

For this objective you will need to familiarize yourself with the passwd command.

Change passwords for local user accounts:
passwd username
As with most commands there are plenty of options, but the above command will prompt you to enter a new password for the account username.

Adjust password aging for local user accounts:

The passwd command can be used to achieve this objective. Say you want a user password to expire in 30 days:
passwd -x 30 username
This can, sort of, also be achieved with chage:
chage -E 'Jul 11, 2011' username
This command will actually disable the account in 30 days, so perhaps it is not quite what the objective calls for, but interesting to know nonetheless.

The following command will let you see the account status:
passwd -S username
You can also get relevant information regarding account aging with:
chage -l username

Create, delete, and modify local user accounts

Another reasonably simple objective:

Create user accounts:

By default, when creating users in Red Hat, a group named after the username will also be created and this will be the primary group for the user, with a group id of 500 or higher. To create a user in this way, and also accepting all the other defaults (create a home directory, bash as the user shell, etc.), simply do:
useradd username
 You can check that a new group has also been created:
getent group | grep username
 Alternatively, you can assign the user to an already existing group:
useradd -g groupname username
There are quite a number of options when creating users, so have a look at them.

Delete user accounts:

This one is fairly simple:
userdel -r username
Note that this will remove the home directory and mail spool for the user; if you want the directory and mail spool to remain, just use:
userdel username
Modify user accounts:

The command used to modify accounts, usermod, is very similar to the one used to add them. Say you want to change the user's shell:
usermod -s /bin/sh username
or make the user member of a couple of extra secondary groups:
usermod -aG groupname1,groupname2 username
 or change the username:
usermod -l newusername oldusername

Modify the system bootloader

This is a fairly open objective, Modify the system bootloader to do what?

As discussed in the previous post, you need to modify the grub.conf file, which by default is in the /boot/grub/ directory.

A sample grub.conf file is listed below:
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/sda3
#          initrd /initrd-[generic-]version.img
#boot=/dev/sda
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-131.0.15.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-131.0.15.el6.x86_64 ro root=UUID=e9b35e95-5634-4891-8854-e4053f3fb350 rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=uk crashkernel=auto rhgb quiet
        initrd /initramfs-2.6.32-131.0.15.el6.x86_64.img
title Red Hat Enterprise Linux (2.6.32-71.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-71.el6.x86_64 ro root=UUID=e9b35e95-5634-4891-8854-e4053f3fb350 rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=uk crashkernel=auto rhgb quiet
        initrd /initramfs-2.6.32-71.el6.x86_64.img
Note how this server has two kernels installed and therefore two entries in the grub.conf file.
There is not that much more to say about this objective, really. Just play about with this file: change the default kernel, menu titles, timeout settings or kernel settings, but before you do, make sure you have a backup copy of the grub.conf file, e.g.:
cp /boot/grub/grub.conf grub.conf.bk
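For instance, to boot the older kernel from the sample above by default and get a bit more time at the boot menu, you would just change these two lines:
default=1
timeout=10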

Friday 10 June 2011

Update the kernel package appropriately to ensure a bootable system

In general it is a much more interesting proposition to install a new kernel rather than to update the existing one, as installing will create a new bootable kernel and leave your current kernel alone, whereas updating it might fail and leave you without a working system. At any rate, yum is your go-to guy when it comes to package management, particularly with the kernel, as it will actually install a new kernel alongside the old one:
yum update kernel
If your system is not connected to a repository, you can update the kernel by downloading the kernel rpm(s) and installing them like this:
yum update kernel-2.6.32-131.0.15.el6.x86_64.rpm  kernel-firmware-2.6.32-131.0.15.el6.noarch.rpm
You could also compile a new kernel, but that is beyond the objectives here I think.
If you don't like yum, you can use rpm to update the kernel or install a new kernel. As I mentioned above, refrain from updating unless necessary.

With the new kernel installed, you can now check /boot/grub/grub.conf to see that you have a new stanza:
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-131.0.15.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-131.0.15.el6.x86_64 ro root=UUID=e9b35e95-5634-4891-8854-e4053f3fb350 rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=uk crashkernel=auto rhgb quiet
        initrd /initramfs-2.6.32-131.0.15.el6.x86_64.img
title Red Hat Enterprise Linux (2.6.32-71.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-71.el6.x86_64 ro root=UUID=e9b35e95-5634-4891-8854-e4053f3fb350 rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=uk crashkernel=auto rhgb quiet
        initrd /initramfs-2.6.32-71.el6.x86_64.img
Note how the new kernel is the new default kernel. After a reboot, you can check that the new kernel is running:
 uname -r
2.6.32-131.0.15.el6.x86_64

Install and update software packages from Red Hat Network, a remote repository, or from the local filesystem

Let's start backwards with this one:

Install and update software packages from the local filesystem:
rpm -ivh mypackage.rpm
rpm -Uvh mypackage.rpm
Install and update software packages from a remote repository:

First let's configure the remote repository. You'll need to create a file with a .repo extension in your /etc/yum.repos.d/ directory. A sample is below. Note that the directory /distro is actually a mounted NFS share.
[nfs]
name=nfs
baseurl=file:///distro/
enabled=1
The baseurl field can be an http://, ftp:// or file:// URL.
It's useful to know how to import the GPG key of a repository; this can be done with the following command:
rpm --import gpgkey
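Alternatively, the key can be referenced straight from the .repo file, along these lines (the key file name and path are just an example):
gpgcheck=1
gpgkey=file:///distro/RPM-GPG-KEY-redhat-release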
Now you are ready to use yum, which handles dependencies as well and thus is really useful. In their simpler forms, the commands are very similar to rpm:
yum install package
yum upgrade package
Install and update software packages from Red Hat Network:

This is identical to the objective above, except that you need to configure (register?) your server with RHN. You'll need to register to see the client configuration guide

Thursday 9 June 2011

Configure a system to run a default configuration FTP server

This is almost identical to this objective. Oddly enough this is pretty much the same objective as Configure anonymous-only download from the RHCE exam.

The first step is to install the vsFTP daemon and the ftp client for testing purposes:
yum install vsftpd ftp
 Now, you can switch it on with
service vsftpd start
Since you presumably want the ftp server to be running automatically at boot, you need to do the following:
chkconfig vsftpd on
That's it, you now have vsftpd running and configured to start at boot. You just need to allow traffic to it, so open the firewall for port 21 and save it:
iptables -I INPUT -p tcp --dport ftp -j ACCEPT; iptables-save > /etc/sysconfig/iptables
You can check this by using the ftp client to connect anonymously to your ftp server, e.g.
ftp 127.0.0.1
Connected to 127.0.0.1 (127.0.0.1).
220 (vsFTPd 2.2.2)
Name (127.0.0.1:root): anonymous
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> ls
227 Entering Passive Mode (127,0,0,1,143,26).
150 Here comes the directory listing.
drwxr-xr-x    2 0        0            4096 May 26  2010 pub
226 Directory send OK.
Make sure that the username is anonymous. You don't need to enter a password, just hit enter.

Configure a system to run a default configuration HTTP server

In its simplest interpretation, the one I'm sticking to by the way, this is a fairly simple objective.
The first step is to install Apache
yum install httpd
 Now, you can switch it on with
service httpd start
Since you presumably want the web server to be running automatically at boot, you need to do the following:
chkconfig httpd on
That's it, you now have Apache running and configured to start at boot. You just need to allow traffic to it, so open the firewall for port 80 and save it:
iptables -I INPUT -p tcp --dport http -j ACCEPT; iptables-save > /etc/sysconfig/iptables
You can check this by using a browser to navigate to localhost, e.g.
elinks 127.0.0.1
You can now add an index.html page. No need to worry about html, just do:
echo "hello" > /var/www/html/index.html
If we try elinks again, we'll see the new page:

elinks 127.0.0.1
Obviously this is just the beginning; you can have a look at the configuration file for Apache, /etc/httpd/conf/httpd.conf, which is very well commented, or you could have a look at the manual.

Configure network services to start automatically at boot

This is another fairly simple objective, which is related to this objective.
The command needed to set network services to start automatically at boot is chkconfig.
If you run chkconfig --list you will get a list of all services and whether they are set to run for each runlevel:
abrtd           0:off   1:off   2:off   3:on    4:off   5:on    6:off
acpid           0:off   1:off   2:on    3:on    4:on    5:on    6:off
atd             0:off   1:off   2:off   3:on    4:on    5:on    6:off

...
xinetd          0:off   1:off   2:on    3:on    4:on    5:on    6:off
ypbind          0:off   1:off   2:off   3:off   4:off   5:off   6:off
You can also check a particular service with:
chkconfig --list servicename
 In order to set a service to start at boot for runlevels 3 & 5 you would use this:
chkconfig --level 35 servicename on
If you want to set the service to run on all runlevels, just issue this command:
chkconfig  servicename on
Note that this will not set the service to run for runlevels 0,1 and 6.

If you don't like chkconfig, there is an alternative command, with a terminal user interface as well :)
ntsysv

Configure systems to launch virtual machines at boot

I guess this one is up there with this objective in the easiest objectives category.
virsh autostart domainname
where domainname is the name of the virtual machine.

So for the vm I just created in the previous post, the command would be
virsh autostart testvm
 which has the following result:
Domain testvm marked as autostarted

Install Red Hat Enterprise Linux systems as virtual guests

You can use virt-manager to do this task, see this link, and perhaps this is the way Red Hat intended this objective to be accomplished, but as a Linux system admin you should aim to do everything via the console, so let's get started with the command line.
Provided that you have just installed the virtualization packages, make sure that the libvirt daemon (libvirtd) is running and issue the following command:
virt-install --prompt
This should prompt you for every detail needed  to create a new virtual machine.
Alternatively, you could provide the details required, like this command:
virt-install -n test -r 1024 --vcpus=1 -l nfs:10.168.20.227:/distro --os-type='linux' --os-variant='rhel6' --network network:default --file=/var/lib/libvirt/images/test.img --file-size=6 -x console=ttyS0
This will let you install Red Hat with a terminal user interface and will allow you to access the console of this machine from a terminal, as discussed here.


and so on, the same as a normal installer but through the terminal, how cool is that :)? Just wait until it finishes. To exit the console at any point, press CTRL + ]
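If you disconnect, you should be able to get back to the guest's console at any time, e.g. for the test vm created above:
virsh list --all
virsh console test
This works because the guest was installed with console=ttyS0 as above.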

Configure a physical machine to host virtual guests

This is one of the easiest objectives for this exam, just issue the following commands to install the necessary packages:
yum groupinstall "Virtualization"
yum groupinstall "Virtualization Client"
There are a further two package groups related to virtualization, Virtualization Tools and Virtualization Platform. I don't think they are really needed, as they install libguestfs, which is a library for accessing and modifying guest disk images, and libvirt, which is a C toolkit to interact with the virtualization capabilities of recent versions of Linux (and other OSes).

Strictly speaking, the Virtualization Client group is not needed either. You will need it on the server from which you create and manage virtual machines, but it does not need to be installed on every server.
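One thing worth doing after installing the groups is making sure libvirtd is running and will start at boot, and that the kvm modules are loaded, along these lines:
service libvirtd start
chkconfig libvirtd on
lsmod | grep kvm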

Install Red Hat Enterprise Linux automatically using Kickstart

Kickstart allows you to install Red Hat automatically, which can be very useful if you have many servers or workstations that share the same configuration, e.g. an HPC cluster, a computer lab with many workstations or a virtualized environment.

By default, the Red Hat installer, anaconda, will write a kickstart file to the /root directory. This file, anaconda-ks.cfg, contains the details used to install your server, so you could consider it a kickstart image of your server.

Note that this also includes the root password, which should be encrypted by default. Assuming that you want to change this, which you should, you could either create a dummy user and look at the encrypted password in /etc/shadow, or use grub-md5-crypt to generate an MD5-hashed password.
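For instance, you could run grub-md5-crypt, type the new password at the prompts, and paste the resulting hash (shown below as $1$... rather than a real hash) into the rootpw line of your ks.cfg:
grub-md5-crypt
rootpw --iscrypted $1$...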

The simplest way of starting a kickstart installation does involve a manual step, and that is to indicate where the ks file is. Thus, when you have booted up and are presented with this screen:

Make sure you press tab and edit the line so that it points to your ks file. This document gives you a list of different ways of sharing your ks.cfg so that it is accessible for installation, like this, where I have made the ks.cfg available via a web server.



This is all well and good if you have a couple of machines or probably good enough for the exam, but if you have hundreds of machines you actually want to automate the whole process. The way to do this is to customize the boot iso, so that it boots your ks.cfg file by default.

Mount the Red Hat boot iso to a directory called /iso and amend the /iso/isolinux/isolinux.cfg file, so that the source of the ks.cfg file is added to the default menu line, e.g.
append initrd=initrd.img ks=http://myserver/myksfile.cfg
Change directory to /iso and run the following command:
mkisofs -J -T -o ../ks.iso -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -R -m TRANS.TBL .
This will create a bootable iso called ks.iso in /. You can now use this iso to automate the deployment, without the need to provide a ks file location every time.
You can of course copy your ks.cfg file to the /iso directory and just use it from there, thus negating the need for a network server to be available. In this case the line above will need to be:
append initrd=initrd.img ks=cdrom:/ks.cfg
The advantage of using a network server (web, ftp, share) is that you can change the kickstart file without the need to burn a new CD, good if you are a cheapskate like me :)
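For reference, a stripped-down ks.cfg looks roughly like the sketch below; the URL and the password hash are placeholders, and your own anaconda-ks.cfg is the best starting point, as mentioned above:
install
url --url=http://myserver/rhel6
lang en_US.UTF-8
keyboard uk
network --bootproto=dhcp
rootpw --iscrypted $1$...
timezone Europe/London
bootloader --location=mbr
clearpart --all --initlabel
autopart
reboot
%packages
@base
%end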

Wednesday 8 June 2011

Configure systems to boot into a specific runlevel automatically

Another short and sweet objective: all you have to do is modify the /etc/inittab file. This file contains a lot of comments and a description of the different run levels, but all you need to modify is this line:
id:3:initdefault:
If you have installed RHEL graphically, your default runlevel will be 5. At any rate, all you need to do is to change the number to your desired run level. Please make sure you understand what services run in the run level you are setting as default. You might find yourself without an expected graphical interface or worse still, unable to access the system remotely.

You can check the current runlevel with:
who -r
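The output is fairly terse, something along the lines of the below (the date and time will obviously vary), and if you want to switch runlevel on the fly rather than at the next boot, init does the job:
who -r
         run-level 3  2011-06-07 09:30
init 5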

Schedule tasks using cron

This is a fairly important objective; at least on the surface, this is something that really helps you, the system administrator. At any rate, I'll refrain from rambling even though I'm in a rambling mood.

Interestingly, Anacron is now used for running the daily, weekly and monthly tasks. The key advantage anacron has over cron is that you can limit the hours the jobs run in, and you can also introduce a random delay, to prevent, for instance, several servers running a task against a database server at exactly the same time.
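Those settings live in /etc/anacrontab, which on RHEL6 looks roughly like this (trimmed to the interesting bits):
RANDOM_DELAY=45
START_HOURS_RANGE=3-22
#period in days   delay in minutes   job-identifier   command
1       5       cron.daily              nice run-parts /etc/cron.daily
7       25      cron.weekly             nice run-parts /etc/cron.weekly
@monthly 45     cron.monthly            nice run-parts /etc/cron.monthly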

The /etc/crontab file has a fairly good description of the time elements of a task definition; alternatively, you could have a look at the man pages. In general, you can schedule a task by simply placing a script in the relevant cron folder, e.g. if you want to run the script myscript.sh every hour, you just need to copy it to /etc/cron.hourly.

If you had a look at /etc/cron.hourly, you might have noticed a file called 0anacron, this file actually kicks off the anacron tasks.

The hourly, daily, weekly, etc. schedules might not be granular enough for you, in which case you can edit /etc/crontab or the user's crontab file. The latter is done with the following command, which simply opens the user's crontab file in vi.
crontab -e
A few points about the time elements. In the day of the week field, 0 and 7 both mean Sunday, and if you specify both day of the week and day of the month, the task will run at both specified times. If you want a task to be repeated every x minutes, simply set the first time element to */x. For example, to run myscript.sh as user myuser every 5 minutes, the following line would be needed (assuming that you are either logged in as myuser or that you have invoked crontab -u myuser -e to access myuser's crontab file):
*/5 * * * *  myscript.sh
You could also set intervals, e.g. this will run myscript.sh every 5 minutes between 20 and 40 minutes past the hour:
20-40/5 * * * *  myscript.sh
If running jobs from /etc/crontab, you do need to specify the user to run them as, normally root I would guess.
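In other words, a system-wide entry in /etc/crontab carries an extra user field, e.g. something like this (backup.sh being a made-up script) to run a backup as root at 02:30 every night:
30 2 * * * root /usr/local/bin/backup.sh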

Note the following excerpt from the manual regarding permissions:
If cron.allow file exists, then you must be listed therein in order to be allowed to use this command. If the cron.allow file does not exist but the cron.deny file does exist, then you must not be listed in the cron.deny file in order to use this command. If neither of these files exists, only the super user will be allowed to use this command.
For completeness, a user can list their cron tasks using crontab -l and delete them with crontab -r, or you could just visit this link, which I only found when looking for info regarding cron.allow, blast.

Always check that the tasks are running as expected by checking the log (/var/log/cron).

Tuesday 7 June 2011

Configure networking and hostname resolution statically or dynamically

Let's start with configuring hostname resolution first:

Static Name Resolution:

This refers to your /etc/hosts file, which is really only practical for a network of a few machines, as you would need to modify the hosts file on every machine for name resolution to work consistently, which quickly becomes impractical.

Dynamic Name Resolution:

This refers to the use of a name server, which is configured in the /etc/resolv.conf file, see the sample below:
search dev.com sams.org
nameserver 10.168.20.1
The file /etc/host.conf controls how name resolution is configured, see default below:
multi on
order hosts,bind
The first entry means that all valid addresses for a host in /etc/hosts will be returned, whereas the second entry specifies that name resolution should first look at the hosts file and then at the name server specified in resolv.conf.

In a somewhat analogous situation to hostname resolution, ip addresses can be set either dynamically or statically.

Before we move on, let's have a look at /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=Subversion
If you need to change the hostname, this is where you need to start. The easiest way is to change the name here and reboot, as otherwise you will need to go through every place in the system where the hostname is referenced.
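If a reboot is not convenient, you can at least change the running hostname straight away with the hostname command (using a made-up name here), remembering that /etc/sysconfig/network is what makes it stick across reboots, and that /etc/hosts may need updating too:
hostname newname.example.com
hostname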

Static IP:

This is fairly simple, just edit the following file /etc/sysconfig/network-scripts/ifcfg-eth0:
DEVICE="eth0"
BOOTPROTO="static"
DNS1="10.168.20.1"
HOSTNAME="Subversion"
HWADDR="00:50:56:88:18:A4"
IPADDR="10.168.20.228"
MTU="1500"
NETMASK="255.0.0.0"
NM_CONTROLLED="yes"
ONBOOT="yes"
The lines you are mainly interested in are BOOTPROTO, IPADDR, NETMASK and DNS1, nothing much to say here really.

Dynamic IP:

This is fairly simple as well, in my case I just edit the configuration file for the other nic, /etc/sysconfig/network-scripts/ifcfg-eth1:
DEVICE="eth1"
BOOTPROTO="dhcp"
HWADDR="00:50:56:88:72:2B"
ONBOOT="yes"
Needless to say, you need a DHCP server for the latter to work. The HWADDR line is not strictly necessary.

Note that you will need to restart the interface for the configuration changes to take effect; this applies to both the static and the dynamic case. You can do this with the following command:
service network restart
In practice, you really only want to restart the interface that you have changed so you should do:
ifdown eth1
ifup eth1
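Once the interface is back up, it is worth confirming that it got the address you expected, with either of these:
ifconfig eth1
ip addr show eth1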

Diagnose and correct file permission problems

I understand what this objective tries to achieve but it is a bit tricky to prepare for it.
The following commands will be your friends:
  1. ls -l directorypath
  2. getfacl filename
The first simply lists all the files in a directory, along with their permissions and ownership, and should be your first port of call when you get a permission denied error, like below, where I'm trying to change to the /root/ directory while logged in as user noroot.
[noroot@Subversion /]$ cd /root/
bash: cd: /root/: Permission denied
If you have enabled ACLs for the file system, then the issue might be related to the acl, so you could have a look with the second command.
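As a contrived example, say user noroot gets permission denied reading a report (the path and file are made up); the diagnosis and a couple of possible fixes might go like this:
ls -ld /data
ls -l /data/report.txt
chmod 644 /data/report.txt
chown noroot /data/report.txt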

This is one of those objectives where you could do with a friend who changes permissions and performs a few naughty actions on your system, e.g. stops a few services, changes SELinux contexts, etc., and you then have to figure out how to get everything back in working order.

Configure systems to mount ext4, LUKS-encrypted and network file systems automatically

The first part of this objective, Configure systems to mount ext4 file systems automatically, has already been discussed in previous posts, here and here. The second part, Configure systems to mount LUKS-encrypted file systems automatically, has been covered here.

The last part of the objective, Configure systems to mount network file systems automatically, is related to this objective, Mount and unmount cifs and nfs network file systems; all we need to do is make the mount permanent by adding an entry to the /etc/fstab file.

NFS
 
Add the following line to your /etc/fstab file:
10.168.20.225:/inst  /distro                 nfs     defaults 0 0
This will mount the nfs share /inst on 10.168.20.225 to a directory called /distro, with default options.
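You can test the entry without rebooting; mount -a will mount anything in fstab that is not already mounted, and df will confirm it:
mount -a
df -h /distro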

CIFS

Add the following line to your /etc/fstab file:
//10.168.20.112/c$ /test cifs credentials=/cred.cifs 0 0
This will mount the c drive of 10.168.20.112 to /test. The cred.cifs file contains the credentials needed to mount the share. This file needs to have the following format:
username=value
password=value
domain=value
Of course, you could simply pass the credentials as mount options instead of using a credentials file. Ensure that the file /cred.cifs is only readable by the appropriate user(s).
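Locking the credentials file down so that only root can read it is as simple as:
chown root:root /cred.cifs
chmod 600 /cred.cifs
and you can then test the entry with mount -a, as with the NFS share above.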

Find locked out domain user accounts

There were a couple of unexplained errors today in this app that is going through testing at the moment, and somebody mentioned that they had been using the same accounts for accessibility testing. There was a massive panic and I was asked to check that none of the accounts were locked out; since we are talking about over 1000 accounts, I was not prepared to go through them one by one.

I first thought of dsquery, but alas it looks like dsquery won't do, so I then turned to adsiedit to query the domain and, bingo, after a bit of searching about I found the right query.
(&(objectCategory=person)(objectClass=user)(lockoutTime>=1))
Step by step then:
  1. Run adsiedit.msc (alternatively, run the MMC console and add the ADSI Edit snap-in)
  2. Right-click on the domain and select New | Query
  3. Give the query a name, select your search root and paste in the query itself
  4. Expand the domain tree and you will see your search
  5. Now you can unlock the locked-out accounts.