Increase productivity with FTP autologin and macros
Posted By LinuxTitli On February 28, 2006 @ 1:24 am In Automation, Backup, CentOS, Debian Linux, Howto, Linux, RedHat/Fedora Linux, Ubuntu Linux, programming
You probably use macros in office packages, but your ftp client supports macros too, via the ~/.netrc user configuration file. The .netrc file contains the login and initialization information used by the auto-login process and can also store macro definitions. It resides in the user's home directory.
Turn on FTP client auto login
You need to add your username and password to the ~/.netrc file. Open the config file using a text editor such as vi:
$ vi ~/.netrc
Append the following line to it:
machine ftp.myserver.com login USERNAME password PASSWORD
Save the file and exit to the shell prompt. Make sure only the owner can read the file:
$ chmod 0600 ~/.netrc
To connect, type the command:
$ ftp ftp.myserver.com
FTP Macros
Now let us say that every time you connect to the ftp server you would like to switch to binary mode, turn off prompting, and change to the directory /pub/data/backup/rdbms/dump/. You can create a macro to automate these three steps:
i) Open ~/.netrc ftp configuration file:
$ vi ~/.netrc
ii) Define a macro
You need to use the following syntax:
macdef macro-name1
command1
command2
macdef macro-name2
command1
command2
Please note that each macro definition ends with a null line (consecutive new line characters in a file or carriage returns from the terminal). There is a limit of 16 macros and 4096 total characters in all defined macros. Macros remain defined until a close command is executed.
Append the following text to the .netrc file:
macdef FOO
bin
prom
cd /pub/data/backup/rdbms/dump/
ls
Save and close the file. Now connect to ftp server:
$ ftp ftp.myserver.com
Output:
Connected to ftp.myserver.com
220 ftp.myserver.com NcFTPd Server (licensed copy) ready.
Remote system type is UNIX.
Using binary mode to transfer files.
To execute the macro FOO, type the command:
ftp> $ FOO
Output:
bin
200 Type okay.
prom
Interactive mode off.
cd /pub/data/backup/rdbms/dump/
250 "/pub/data/backup/rdbms/dump/" is new cwd.
ftp> ls
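If your ftp client implements the classic ~/.netrc behaviour described in the netrc/ftp man pages, you can go one step further and name the macro init: a macro with that name is executed automatically as the last step of the auto-login process, so the commands run as soon as you connect. A sketch of the combined file, reusing the entries from above:
machine ftp.myserver.com login USERNAME password PASSWORD
macdef init
bin
prom
cd /pub/data/backup/rdbms/dump/

Note the blank line that terminates the macro definition.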
Further reading:
=> ftp command man page
Thursday, January 29, 2009
Wednesday, January 28, 2009
services-admin
GNOME's services-admin tool lets you turn services on or off as well as specify runlevels and the actions to take. It provides a GUI on GNOME, usually accessible by choosing System | Administration | Services. In the Services Settings window every service displays a check box that, when checked, will cause the service to start at boot time; those unchecked will not run. To turn on a service, scroll to its entry and click the check box next to it to add a check mark. To turn off a service, click its check box to remove the check mark.
NB: tested on Ubuntu 8.04
update-rc.d
The update-rc.d tool is a lower-level tool that can install or remove runlevel links. It is usually used when installing service packages to create default runlevel links, but you can also use it to configure your own runlevels for a service.
The update-rc.d tool does not affect links that are already installed. It works only on links that are not already present in the runlevel directories. In this respect, it cannot turn a
service on or off directly as can sysv-rc-conf. To turn off a service, you would first have to
remove all runlevel links in all the rcn.d directories using the remove option and then add
in the services you want with the start or stop options. This makes turning services on
and off using the update-rc.d tool much more complicated.
You use start and stop options along with the runlevel to set the runlevels at which to start or stop a service. You will need to provide a link number for ordering the sequence in which it will be run. Enter the runlevel followed by a period. You can specify more than one runlevel. The following line will start the web server on runlevel 5. The order number used for the link name is 91. The link name will be S91apache. Be sure to include the sudo command.
sudo update-rc.d apache start 91 5 .
The stop number is always 100 minus the start number. So the stop number for a service with a start number of 91 would be 09:
sudo update-rc.d apache stop 09 6 .
The start and stop options can be combined, like so:
update-rc.d apache start 99 5 . stop 09 6 .
A defaults option will start and stop the service at a predetermined runlevel. This option can be used to set standard start and stop links for all runlevels. Startup links will be set in runlevels 2, 3, 4, and 5. Stop entries are set in runlevels 0, 1, and 6.
update-rc.d apache defaults
The following command performs the same operation using the stop and start options:
update-rc.d apache start 99 2 3 4 5 . stop 09 0 1 6 .
The multiuser option will start entries at runlevels 2, 3, 4, and 5 and stop them at 1:
update-rc.d apache multiuser
To remove a service you use the remove option. The links will not be removed if the
service script is still present in the init.d directory. Use the -f option to force removal of the
links without having to remove the service script. The following removes all web service
startup and shutdown entries from all runlevels:
update-rc.d -f apache remove
To turn off a service at a given runlevel that is already turned on, you would first have to remove all its runlevel links and then add in the links you want. So, to turn off the Apache
server at runlevel 3, but still have it turned on at runlevels 2, 4, and 5, you would use the
following commands:
update-rc.d -f apache remove
update-rc.d apache start 99 2 4 5 . stop 09 0 1 3 6 .
Keep in mind that the remove option removes all stop links as well as start ones. So you have to restore the stop links for 0, 1, and 6.
TIP: On Debian and Ubuntu you can use file-rc instead of sysv-rc. The file-rc tool uses a single configuration file instead of links in separate runlevel directories.
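To double-check what a given invocation actually created, you can simply list the links in the runlevel directories (this assumes the service script is installed as /etc/init.d/apache):
ls -l /etc/rc?.d/*apache*
Start links show up with an S prefix (for example S91apache) and stop links with a K prefix (for example K09apache).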
Tuesday, January 27, 2009
Sticky Bit Permissions
Another special permission provides for greater security on directories. Originally, the sticky
bit was used to keep a program in memory after it finished execution to increase efficiency.
Current Linux systems ignore this feature. Instead, it is used for directories to protect files
within them. Files in a directory with the sticky bit set can be deleted or renamed only by the root user, the owner of the directory, or the owner of the file itself.
Using Symbols
The sticky bit permission symbol is t. The sticky bit shows up as a t in the execute position of
the other permissions. A program with read and execute permissions with the sticky bit has
its permissions displayed as r-t.
Here’s an example:
# chmod +t /home/dylan/myreports
# ls -ld /home/dylan/myreports
drwxr-xr-t 1 root root 4096 /home/dylan/myreports
Using the Binary Method
As with ownership, for sticky bit permissions, you add another octal number to the beginning
of the octal digits. The octal digit for the sticky bit is 1 (001). The following example sets the
sticky bit for the myreports directory:
# chmod 1755 /home/dylan/myreports
The next example sets both the sticky bit and the User ID permission on the newprogs directory.
The permission 5755 has the binary equivalent of 101 111 101 101:
# chmod 5755 /usr/bin/newprogs
# ls -ld /usr/bin/newprogs
drwsr-xr-t 1 root root 4096 /usr/bin/newprogs
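A familiar real-world case is the shared /tmp directory, which normally ships with the sticky bit set (mode 1777) so that users cannot delete or rename each other's temporary files. The link count and size below are illustrative; the trailing t in the permissions is what matters:
# ls -ld /tmp
drwxrwxrwt 12 root root 4096 /tmp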
Symbolic links vs hard links
sulekha
AFAIK the following are the differences between symbolic links and hard links. Is there any other point that I have missed?
symbolic link
When you try to open a symbolic link that points to a file, or change to one that points to a directory, the command you run acts on the file or directory that is the target of that link.
The target has its own permissions and ownership that you cannot see from the symbolic link.
The symbolic link can exist on a different disk partition than the target.
Hard link
A hard link can only be used on files (not directories) and is basically a way of giving multiple names to the same physical file.
Hard links that point to that single physical file must be on the same partition as the original target file.
Two names are hard links to the same file if they have the same inode number.
Grant
Not always, consider NFS ;)
When using hard linked file trees, some tools (patch) know how to break hardlinks when required. Also means using an editor that is aware of hardlinks and knows to break them when modifying a file. I use hard links for linux-kernel source trees (cp -al kernel-a kernel-b), also
for backups via a cron job where only changed files are copied whilst older files are simply hard-linked, takes much less space and is far more convenient than incremental backup methods. Rsync is hardlink aware.
Andrew Halliwel
Fairly correct apart from one thing. You CAN hardlink a directory. In fact, you see those "." and ".." directories in your directory listing when you ls -a?
Them's hard links them are. Do an ls -ia on a directory and compare the inode number for ".." with the inode of the parent directory. Then compare the inode number of the "." with the inode of the directory you're in.
maxwell lol
Well, the OS can do this. The user has no control.
Older versions of Unix allowed this, but when you ran fsck, the file system would be corrupted, as I recall.
So mkdir became an atomic operation.....
Andrew Halliwel
Heh, true. I'd never tried hard linking a directory. Just assumed it was possible cos of the dot hardlinks.
kees theunissen
And the OS does this only for the special directories "." and "..". Allowing multiple hard links, other than "." and "..", for directories would imply that directories could have multiple parent directories. This would break the concept of ".." pointing to _the_ parent directory.
billymayday
http://lwn.net/Articles/294667/
musafi
How is a hard link created?
A hard link to a file is created using the following command:
root@xyz# ln file1 file2
The above command makes file2 a hard link to the same file that file1 refers to.
How is a soft link created?
A soft link to a file is created using the following command:
root@xyz# ln -s file1 file3
The above command makes file3 a soft link to the file that the hard link file1 refers to.
How is a file with multiple hard links deleted?
A file with multiple hard links is only deleted after all its hard links are deleted. Then the question is: is there any way that we can remove all of the hard links with a single command? The answer to this question is ...*
How is a file with a soft link deleted?
A file that has a soft link pointing to it is deleted normally, using the rm command. In this case the soft link becomes broken, i.e. not linked to anything.
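One possible approach, offered here as a sketch rather than as the original poster's answer: with GNU find you can delete every name that shares the file's inode, provided the directory you search contains all of the links (the path below is a placeholder):
root@xyz# find /some/dir -xdev -samefile file1 -delete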
What happens to the soft links if a file is moved from its original location?
If a file is moved from its original location, the soft link pointing to it becomes broken. If the file is moved back to its original location, the broken link becomes active again.
What happens to the soft links if a file is removed?
If a file is removed, the soft links continue to exist but become broken.
What happens to the hard links if a file is moved from its original location?
It makes no difference. Moving a file that has multiple hard links simply means moving one of those hard links. So if a hard link is moved to some other place, it does not affect any other hard link pointing to the same file.
What happens to the hard links if a file is removed?
You cannot really remove a file that still has other hard links. If you remove one hard link, that particular link is deleted, not the actual file. The actual file is only deleted when the last hard link pointing to it is removed.
How can we list all of the hard links or soft links to a particular file?
List Hard Links
You can list all of the hard links to a file 'file1' by using the following command
root@xyz# find [directory to search] -samefile [file name (which can be any hard link) as argument]
Ex: root@xyz# find / -samefile file1 or root@xyz# find . -samefile file1
You can also search for the hard links using the file's inode number:
Ex: root@xyz# find . -inum [inode number]
You can find the inode number of a file by using the following command
root@xyz# ls -i
List Soft Links
You can list the soft links to a file using the following command
root@xyz# find -lname file1
It is better to put a * as a prefix on the file name, like below:
root@xyz# find -lname "*file1"
Because we have not mentioned the directory where the search should be made, the search will be made in the current directory. We can specify the directory to search in the following way:
root@xyz# find . -lname "*file1" or root@xyz# find / -lname "*file1"
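You can also check how many hard links a file has at a glance; the third field of ls -li is the link count, and with GNU stat the %h and %i format specifiers print the link count and inode number so you can confirm two names refer to the same file:
root@xyz# ls -li file1
root@xyz# stat -c '%h %i %n' file1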
see also:
http://linuxgazette.net/105/pitcher.html
Hard Links
You can give the same file several names by using the ln command on the same file many times. To set up a hard link, you use the ln command with no -s option and two arguments: the name of the original file and the new, added filename. The ls operation lists both filenames, but only one physical file will exist.
$ ln original-filename added-filename
In the next example, the monday file is given the additional name storm. In this case,
storm is just another name for the monday file.
$ ls
monday
$ ln monday storm
$ ls
monday storm
To erase a file that has hard links, you need to remove all its hard links. The name of a file is actually considered a link to that file—hence the command rm removes the link to the file. If you have several links to the file and remove only one of them, the others stay in place and you can reference the file through them. The same is true even if you remove the original link—the original name of the file. Any added links will work just as well. In the next example, the today file is removed with the rm command. However, a link to that same file exists, called weather. The file can then be referenced under the name weather.
$ ln today weather
$ rm today
$ cat weather
The storm broke today
and the sun came out.
$
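A quick way to watch this in action is the hard-link count, the number just after the permission bits in ls -l output (a sketch using the files above): right after ln today weather both names show a count of 2, and once today is removed the count on weather drops back to 1:
$ ls -l today weather      (both lines show a link count of 2)
$ rm today
$ ls -l weather            (the count is back to 1)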
"ubuntu help and documentation"
URL Description
https://help.ubuntu.com Help pages
http://packages.ubuntu.com Ubuntu software package list and search
www.ubuntuforums.org Ubuntu forums
http://ubuntuguide.org Guide to Ubuntu
http://fridge.ubuntu.com News and developments
http://planet.ubuntu.com Member and developer blogs
http://blog.canonical.com Latest Canonical news
www.tldp.org Linux Documentation Project Web site
http://ubuntuguide.org All purpose guide to Ubuntu topics
www.ubuntugeek.com Specialized Ubuntu modifications
www.ubuntu.com/community Links to documentation, support, news, and blogs
http://lists.ubuntu.com Ubuntu mailing lists
Monday, January 26, 2009
umask
Permission Defaults: umask
umask (abbreviated from user mask) is a command and a function in POSIX environments which sets the default permission modes for newly created files and directories of the current process. When a shell or other program is creating a file or directory, it specifies the permissions to be granted. The operating system then removes from those the permissions that the umask does not allow.
The umask only restricts permissions; it cannot grant extra permissions beyond what is specified by the program that creates the file or directory. When programs create files, they usually specify read and write permissions for all users, and no execute permissions at all (rw-rw-rw- or octal 666 in traditional Unix notation). Files created in this way will not be executable even if the umask would have allowed that.
On the other hand, when programs create directories, they usually specify read, write, and execute permissions for all users (rwxrwxrwx or octal 777). Directories created in this way will thus be executable unless the umask restricts that.
$ umask -S
u=rwx,g=rx,o=rx
This default umask provides rw-r--r-- permission for standard files and adds execute
permission for directories, rwxr-xr-x.
You can set a new default by specifying permissions in either symbolic or binary format.
To specify the new permissions, use the -S option. The following example denies others
read permission, while allowing user and group read access, which results in permissions of
rwxr-x---:
$ umask -S u=rwx,g=rx,o=
When you use the binary format, the mask is the inverse of the permissions you want to
set. To set both the read and execute permissions on and the write permission off, you use
the octal number 2, (binary 010). To set all permissions on, you use an octal 0 (binary 000).
The following example shows the mask for the permission defaults rwx, rx, and rx (rw, r,
and r for files):
$ umask
0022
To set the default to deny all permissions only for others, you use 0027, using the binary
mask 0111 for the other permissions:
$ umask 0027
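A quick way to see the effect (a sketch; the file names, owner, and group shown are made up): set the stricter mask, then create a file and a directory and check the resulting modes:
$ umask 0027
$ touch report
$ mkdir reports
$ ls -ld report reports
-rw-r----- 1 chris group 0 report
drwxr-x--- 2 chris group 4096 reports
The file gets 640 (666 masked by 027) and the directory gets 750 (777 masked by 027).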
see: http://en.wikipedia.org/wiki/Umask
symbolic links
Symbolic links function like shortcuts referencing another file. They are much more flexible than hard links and can even point across different file systems.
To set up a symbolic link, you use the ln command with the -s option and two arguments:
the name of the original file and the new, added filename. The ls operation lists both
filenames, but only one physical file will exist.
$ ln -s original-filename added-filename
In the next example, the today file is given the additional name weather. In this case, weather is another name for the today file.
$ ls
today
$ ln -s today weather
$ ls
today weather
You can give the same file several names by using the ln command on the same file many times. In the next example, the file today is assigned the names weather and weekend:
$ ln -s today weather
$ ln -s today weekend
$ ls
today weather weekend
If you list the full information about a symbolic link and its file, you will find the information displayed is different. In the next example, the user lists the full information for
both lunch and /home/george/veglist using the ls command with the -l option. The first character in the line specifies the file type. Symbolic links have their own file type, represented by an l. The file type for lunch is l, indicating it is a symbolic link, not an ordinary file. The number after the term group is the size of the file. Notice the sizes differ. The size of the lunch file is only 4 bytes. This is because lunch is only a symbolic link—a file that holds the pathname of another file—and a pathname takes up only a few bytes. It is not a direct hard link to the veglist file.
$ ls -l lunch /home/george/veglist
-rw-rw-r-- 1 george group 793 Feb 14 10:30 veglist
lrw-rw-r-- 1 chris group 4 Feb 14 10:30 lunch
To erase a file, you need to remove only its original name (and any hard links to it). If any symbolic links are left over, they will be unable to access the file. In this case, a symbolic
link will hold the pathname of a file that no longer exists.
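A handy way to find such dangling links is GNU find (a sketch): with the -L option, -type l only matches links that cannot be followed, i.e. broken ones; GNU find also accepts the shorter idiom -xtype l.
$ find -L . -type l
$ find . -xtype l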
tune2fs example
You can even upgrade ext2 file systems to ext3 versions automatically, with no loss of data or change in partitions. This upgrade just adds a journal file to an ext2 file
system and enables journaling on it, using the tune2fs command. Be sure to change the ext2 file type to ext3 in any corresponding /etc/fstab entries.
The following example converts the ext2 file system on /dev/hda3 to an ext3 file system by adding a journal file (-j):
tune2fs -j /dev/hda3
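For instance, a matching /etc/fstab entry would change only in the file system type column (the device and mount point below are illustrative):
/dev/hda3  /home  ext2  defaults  1 2     (before)
/dev/hda3  /home  ext3  defaults  1 2     (after running tune2fs -j /dev/hda3)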
usermod
The usermod command allows you to modify an existing user in the system. It works in
much the same way as useradd. Its usage is summarized here:
usage: usermod [-u uid [-o]] [-g group] [-G group,...]
[-d home [-m]] [-s shell] [-c comment] [-l new_name]
[-f inactive] [-e expire ] [-p passwd] [-L|-U] name
Every option you specify when using this command results in that particular parameter
being modified for the user. All but one of the parameters listed here are identical to the
parameters documented for the useradd command. The one exception is -l.
The -l option allows you to change the user’s login name. This and the -u option are
the only options that require special care. Before changing the user’s login or UID, you
must make sure the user is not logged into the system or running any processes. Changing this information if the user is logged in or running processes will cause unpredictable results.
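A quick pre-flight check (a sketch; bogususer is the account used in the exercise below) is to confirm there are no login sessions or processes for the account before you touch it:
[root@fedora-serverA ~]# who | grep bogususer
[root@fedora-serverA ~]# pgrep -u bogususer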
Modifying User Attributes with usermod
Now try using usermod to change the user and group IDs for a couple of accounts.
1. Use the usermod command to change the user ID (UID) of the bogususer to
600. Type [root@fedora-serverA ~]# usermod -u 600 bogususer
2. Use the id command to view your changes. Type
[root@fedora-serverA ~]# id bogususer
The output shows the new UID (600) for the user.
3. Use the usermod command to change the primary group ID (GID) of the bogus-
user account to that of the bogus group (GID = 101) and to also set an expiry date
of 12-12-2010 for the account. Type
[root@fedora-serverA ~]# usermod -g 101 -e 2010-12-12 bogususer
4. View your changes with the id command. Type
[root@fedora-serverA ~]# id bogususer
5. Use the chage command to view the new account expiration information for
the user. Type
[root@fedora-serverA ~]# chage -l bogususer
Last password change : Sep 23, 2009
Password expires : never
Password inactive : never
Account expires : Dec 12, 2010
Minimum number of days between password change : 0
Maximum number of days between password change : 99999
Number of days of warning before password expires : 7
6. To change the login name for an account: usermod -l newuser user
useradd
Ex: useradd username
The useradd utility first checks the /etc/login.defs file for default values for creating a new account. For those defaults not defined in /etc/login.defs, useradd supplies its own. You can display these defaults using the useradd command with the -D option. Values the user enters on the command line will override the corresponding defaults.
ex: useradd jeff -g introl -u 689
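For reference, useradd -D prints the current defaults; the exact values vary by distribution, but the output looks roughly like this:
useradd -D
GROUP=100
HOME=/home
INACTIVE=-1
EXPIRE=
SHELL=/bin/sh
SKEL=/etc/skel
CREATE_MAIL_SPOOL=no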
userdel command
When you want to remove a user from the system, you can use the userdel command to delete the user's login. With the -r option, the user's home directory will also be removed.
ex: userdel -r username
chage command
The chage command lets you specify an expiration limit for a user's password. A user can be required to change his or her password every month, every week, or by a given date. Once the password expires, the user is prompted to enter a new one. You can issue a warning beforehand, telling the user how much time is left before the password expires.
If you want to close an account, you can permanently expire a password. You can even shut down accounts that are inactive for too long. The -M option with the number of days sets the maximum time that a password can be valid. In the next example, the password for the chris account will stay valid for seven days:
chage -M 7 chris
To set a particular date for the account to expire, use the -E option with the date
specified mm/dd/yyyy:
chage -E 07/30/2008 chris
To find out the current expiration settings for a given account, use the -l option:
chage -l chris
You can also combine your options into one command, like so:
chage -M 7 -E 07/30/2008 chris
Sunday, January 25, 2009
/etc/protocols
The /etc/protocols file lists the TCP/IP protocols currently supported by your system. Each entry shows the protocol number, its keyword identifier, and a brief description.
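A few representative lines from /etc/protocols (the real file is much longer); the columns are keyword, protocol number, aliases, and a comment:
ip      0       IP      # internet protocol, pseudo protocol number
icmp    1       ICMP    # internet control message protocol
tcp     6       TCP     # transmission control protocol
udp     17      UDP     # user datagram protocol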
see: http://iana.org/assignments/protocol-numbers
tcpdump simple examples
tcpdump is the elder statesman of packet sniffers. In practice, it is often the first utility you turn to when you want to get a look at traffic on your network.
Packet sniffing on a given interface:
tcpdump -i eth0
Capture 100 packets on eth0 and save them to a file:
tcpdump -c 100 -i eth0 -w my_sniffed_packets
Read the saved capture back and dump it as text:
tcpdump -r my_sniffed_packets > my_packets_text
Watch live traffic while filtering out ssh lines (handy when you are sniffing over an ssh session):
tcpdump | grep -v ssh
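You can also hand tcpdump a capture filter to narrow the traffic at capture time; a sketch (the interface, host, and port are placeholders):
tcpdump -n -i eth0 host 10.12.104.200 and port 80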
nmap
To probe a single target, specify the host name or address:
nmap -v target.example.com
nmap -v 10.12.104.200
nmap -v 122.166.23.91,20-25
By default, nmap uses both TCP and ICMP pings for host discovery. If these are blocked by an intervening firewall, the nmap -P options provide alternate ping strategies. If you know that your targets are up, you can disable host discovery with the -P0 option.
Run nmap as root if possible. Some of its more advanced tests intentionally violate IP protocols and require raw sockets that only the superuser is allowed to access.
Use the -F option to quickly scan only the well-known ports, or the -p option to select a different, specific, numeric range of ports. If you want to exhaustively scan all ports, use -p 0-65535.
Disable port scanning entirely with the nmap -sP option.
nmap -O enables operating system fingerprinting.
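Putting a few of these options together (the target name and address are the same placeholders used above):
nmap -v -F -O target.example.com
nmap -v -p 20-25,80,443 10.12.104.200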
Saturday, January 24, 2009
DROP Vs REJECT
DROP
1) DROP (or deny) simply swallows the packet, never to be seen again, and emits no response.
2) A DROP policy makes it appear to peers that your host is turned off or temporarily unreachable due to network problems.
3) Attempts to connect to TCP services will take a long time to fail, as clients will receive no explicit rejection message.
REJECT
1) REJECT responds to the packet with a friendly message back to the sender, something like "hello, I have rejected your packet" (in practice an ICMP error or a TCP reset).
2) It can leave you open to DoS attacks.
Source: Linux Security Cookbook
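For illustration, here is how the two targets look as iptables rules (the ports are arbitrary examples; --reject-with tcp-reset is a common choice for TCP so clients get an immediate reset instead of an ICMP error):
iptables -A INPUT -p tcp --dport 23 -j DROP
iptables -A INPUT -p tcp --dport 113 -j REJECT --reject-with tcp-reset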
Friday, January 23, 2009
tcpwrapper
TCPWrapper is a program which when integrated with your host OS mechanism for accepting connections from remote users, allows administrators to uniformly enforce greater logging and access control than many network services are able to support. TCPwrapper can tell you who is connecting when, from where, and to which services, while allowing you to selectively accept or deny connections at an early opportunity. It can also trigger external commands when a particular connection criteria is met. This gives the TCPwrappers a lot of potential.
The general purpose of tcpwrapper is to monitor and filter incoming requests for SYSTAT, FINGER, FTP, TELNET, RLOGIN, RSH, EXEC, TFTP, TALK and other network services.
It is limited to TCP; it is not applicable to UDP or ICMP.
Access control is configured in /etc/hosts.allow and /etc/hosts.deny.
/etc/hosts.allow is read first.
Limits are applied per service.
The daemons it wraps are typically those in /usr/sbin.
Clients can be specified by domain name or by IP address.
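A minimal illustrative pair of files (the network and domain below are placeholders): deny everything by default, then allow selected services from trusted sources. Remember that /etc/hosts.allow is consulted first, so its matches win:
/etc/hosts.allow:
sshd: 192.168.1. .example.com
/etc/hosts.deny:
ALL: ALL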



iptables examples
Anatomy of an iptables command
Table
Table specifies the name of the table the command operates on: Filter, NAT, or Mangle. You can specify a table name in any iptables command. When you do not specify a table name, the command operates on the filter table. Specify a table as -t tablename or --table tablename.
Command
The command tells iptables what to do with the rest of the command line, for example add or delete a rule, display rules, or add a chain.
Ex:-
-A or --append
-D or --delete
-I or --insert
-R or --replace (iptables -R chain rule-number rule-specification --jump target)
-L or --list (iptables -L [chain] display-criteria)
-F or --flush (Deletes all rules from chain, Omit chain to delete all rules from all chains, ex:- iptables -F [chain] )
-Z or --zero (changes to zero the value of all packet and byte counters in chain or in all chains when you do not specify chain.Use with -L to display the counters before clearing them. iptables -Z [-L] [chain])
-X or --delete-chain (Removes the user defined chain named chain. if you do not specify chain, removes all the user defined chains. you cannot delete a chain that target points to. iptables -X chain)
CHAINS
A chain is simply a list of rules that act on a packet flowing through the system. The chain portion of the command specifies the name of the chain that the rule belongs to or that the command works on. The chain is INPUT, OUTPUT, FORWARD, PREROUTING, POSTROUTING, or the name of a user-defined chain.
FORWARD
The FORWARD chain is invoked only in the case when IP forwarding is enabled and the packet is destined for a system other than the host itself. For example, if the Linux system has the IP address 172.16.1.1 and is configured to route packets between the Internet and the 172.16.1.0/24 network, and a packet from 1.1.1.1 is destined to 172.16.1.10, the packet will traverse the FORWARD chain.
INPUT
The INPUT chain is invoked only when a packet is destined for the host itself. The rules that are run against a packet are done before the packet goes up the stack and arrives at the application
OUTPUT
The OUTPUT chain is invoked when packets are sent from applications running on the host itself. For example, if an administrator on the command-line interface (CLI) tries to use SSH to connect to a remote system, the OUTPUT chain will see the first packet of the connection
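Although the -P command is not in the list above, it is worth knowing that each built-in chain also carries a default policy that applies when no rule matches; a common hardening sketch sets restrictive defaults before any rules are added:
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT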
Defining the Rule-Specification
-p [!] protocol This specifies the IP protocol to compare against. You can use any protocol defined in the /etc/protocols file, such as “tcp,” “udp,” or “icmp.” A built-in value for “all” indicates that all IP packets will match. If the protocol is not defined in /etc/protocols, you can use the protocol number here. For example, 47 represents “gre.” The exclamation mark (!) negates the check. Thus, specifying -p ! tcp means all packets that are not TCP. If this option is not provided, Netfilter will assume “all.” The --protocol option is an alias for this option. An example of its usage is
[root@serverA ~]# iptables -t filter -A INPUT -p tcp --dport 80 -j ACCEPT
For ip6tables, use
[root@serverA ~]# ip6tables -t filter -A INPUT -p tcp --dport 80 -j ACCEPT
These rules will accept all packets destined to TCP port 80 on the INPUT chain.
-d [!] address[/mask] This option specifies the destination IP address to check against. When combined with an optional netmask, the destination IP can be compared against an entire netblock. As with -s, the exclamation mark negates the rule, and the address and netmask can be abbreviated.
MATCH CRITERIA
These are the packet match criteria (rule specifications) that a packet is tested against.
-i interface This option specifies the name of the interface on which a packet was received. This is handy for instances where special rules should be applied if a packet arrives from a physical location, such as a DMZ interface. For example, if eth1 is your DMZ interface and you want to allow it to send packets to the host at 10.4.3.2, you can use
[root@serverA ~]# iptables -A FORWARD -i eth1 -d 10.4.3.2 -j ACCEPT
-o interface This option specifies the name of the interface on which a packet will leave the system. For example,
[root@serverA ~]# iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
In this example, any packets coming in from eth0 and going out to eth1 are accepted.
[!] -f This option specifies whether a packet is an IP fragment or not. The exclamation mark negates this rule. For example,
[root@serverA ~]# iptables -A INPUT -f -j DROP
In this example, any IP fragments coming in on the INPUT chain are automatically dropped. The same rule with negative logic would be
[root@serverA ~]# iptables -A INPUT ! -f -j ACCEPT
-c PKTS BYTES This option allows you to set the counter values for a particular rule when inserting, appending, or replacing a rule on a chain. The
counters correspond to the number of packets and bytes that have traversed the rule, respectively. For most administrators, this is a rare need. An example of its usage is
[root@serverA ~]# iptables -I FORWARD -f -j ACCEPT -c 10 10
In this example, a new rule allowing packet fragments is inserted into the FORWARD chain, and the packet counters are set to 10 packets and 10 bytes.
-v This option will display any output of iptables (usually combined with
the -L option) to show additional data. For example,
[root@serverA ~]# iptables -L -v
-n This option will display any hostnames or port names in their numeric form. Normally, iptables will do Domain Name System (DNS) resolution for you and show hostnames instead of IP addresses and protocol names (like SMTP) instead of port numbers (25). If your DNS system is down, or if you do not want to generate any additional packets, this is a useful option. An example of this is
[root@serverA ~]# iptables -L -n
-x This option will show the exact values of a counter. Normally, iptables will try to print values in “human-friendly” terms and thus perform rounding in the process. For example, instead of showing “10310,” iptables will show “10k.” An example of this is
[root@serverA ~]# iptables -L -x
--line-numbers This option will display the line numbers next to each rule in a chain. This is useful when you need to insert a rule in the middle of a chain and need a quick list of the rules and their corresponding rule numbers.
An example of this is
[root@serverA ~]# iptables -L --line-numbers
For IPv6 firewall rules, use
[root@serverA ~]# ip6tables -L --line-numbers
-----------------------------------------------------------------------------------------------
Rule-Spec Extensions with Match
limit
This module provides a method of limiting the packet rate. It will match so long as the rate of packets is under the limit. A secondary “burst” option matches against a momentary spike in traffic, but will stop matching if the spike sustains. The two parameters are
--limit rate
--limit-burst number
The rate is the sustained packet-per-second count. The number in the second parameter specifies how many back-to-back packets to accept in a spike. The default value for number is 5. You can use this feature as a simple approach to slowing down a SYN flood:
[root@serverA ~]# iptables -N syn-flood
[root@serverA ~]# iptables -A INPUT -p tcp --syn -j syn-flood
[root@serverA ~]# iptables -A syn-flood -m limit --limit 1/s -j RETURN
[root@serverA ~]# iptables -A syn-flood -j DROP
This will limit the connection rate to an average of one per second, with a burst up to
five connections. This isn’t perfect, and a SYN flood can still deny legitimate users with
this method; however, it will help keep your server from spiraling out of control.
..................................................................................
JUMPS
A jump transfers control to a different chain within the same table.
ex:- sudo iptables --append INPUT --protocol tcp --jump tcp_rules
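For that jump to work, the user-defined chain has to exist first; a minimal sketch (tcp_rules is just the placeholder name from the example above, and the ssh port is arbitrary):
sudo iptables --new-chain tcp_rules
sudo iptables --append tcp_rules --protocol tcp --dport 22 --jump ACCEPT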
TARGETS
-j target This option specifies an action to “jump” to. These actions are referred to as targets in iptables parlance. The targets that we’ve seen so far have been ACCEPT, DROP, and RETURN. The first two accept and drop packets, respectively. The third is related to the creation of additional chains.
As we saw in the preceding section, it is possible for you to create your own chains to help keep things organized and to accommodate more complex rules. If iptables is evaluating a set of rules in a chain that is not built-in, the RETURN target will tell iptables to return back to the parent chain. If iptables sees the RETURN action in one of the built-in chains, it will execute the default rule for the chain.
Additional targets can be loaded via Netfilter modules. For example, the REJECT target can be loaded with ipt_REJECT, which will drop the packet and return an ICMP error packet back to the sender. Another useful target is ipt_REDIRECT, which can make a packet be destined to the NAT host itself even if the packet is destined for somewhere else.
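As an illustration of the REDIRECT target just mentioned (a sketch; port 3128 is simply a typical local proxy port), incoming web traffic can be diverted to a proxy running on the NAT host itself:
[root@serverA ~]# iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 3128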
------------------------------------------------------------------------------------------------------------------------------------
EXPLICIT MATCH EXTENSIONS
STATE
The state extension matches criteria based on the state of the connection the packet is part of.
--state state:-
matches a packet whose state is defined by state, a comma-separated list of states from the following list.
ESTABLISHED :-
any packet, within a specific connection, following the exchange of packets in both directions for that connection.
INVALID :-
a stateless or unidentifiable packet.
NEW :-
the first packet within a connection, typically a SYN packet.
RELATED :-
any packets exchanged in a connection spawned from an ESTABLISHED connection. ex:- an FTP data connection might be related to the FTP control connection.
state
This module allows you to determine the state of a TCP connection through the eyes of the conntrack module. It provides one additional option:
--state state Here, state is INVALID, ESTABLISHED, NEW, or RELATED. A state is INVALID if the packet in question cannot be associated with an existing flow. If the packet is part of an existing connection, the state is ESTABLISHED. If the packet is starting a new flow, it is
considered NEW. Finally, if a packet is associated with an existing connection (e.g., an FTP data transfer), then it is RELATED.
Using this feature to make sure that new connections have only the TCP SYN bit set, we do the following:
[root@serverA ~]# iptables -A INPUT -p tcp ! --syn -m state --state NEW -j DROP
Reading this example, we see that for a packet on the INPUT chain that is TCP, that does not have the SYN flag set, and the state of a connection is NEW, we drop the packet. (Recall that legitimate new TCP connections must start with a packet that has the SYN bit set.)
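Putting the state match together, a minimal sketch of a typical stateful INPUT policy (the SSH port is only an example service, not part of the original text):
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT # allow replies to traffic we initiated
iptables -A INPUT -m state --state INVALID -j DROP # drop packets that belong to no known flow
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT # allow new inbound SSH connections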
tcp
This module allows us to examine multiple aspects of TCP packets. We have seen
some of these options (like --syn) already. Here is a complete list of options:
▼ --source-port [!] port[:port] This option examines the source port of
a TCP packet. If a colon followed by a second port number is specified, a range
of ports is checked. For example, “6000:6010” means “all ports between 6000
and 6010, inclusive.” The exclamation mark negates this setting. For example,
--source-port ! 25 means “all source ports that are not 25.” An alias for
this option is --sport.
■ --destination-port [!] port[:port] Like the --source-port option,
this examines the destination port of a TCP packet. Port ranges and negation
are supported. For example, --destination-port ! 9000:9010 means “all
ports that are not between 9000 and 9010, inclusive.” An alias for this option is
--dport.
■ --tcp-flags [!] mask comp This checks the TCP flags that are set in a packet.
The mask tells the option what flags to check, and the comp parameter tells the
option what flags must be set. Both mask and comp can be a comma-separated
list of flags. Valid flags are SYN, ACK, FIN, RST, URG, PSH, ALL, and NONE,
where ALL means all flags and NONE means none of the flags. The exclamation
mark negates the setting. For example, --tcp-flags ALL SYN,ACK means
that the option should check all flags and that only the SYN and ACK flags
must be set.
▲ [!] --syn This checks if the SYN flag is enabled. It is logically equivalent
to --tcp-flags SYN,RST,ACK SYN. The exclamation point negates the
setting.
An example using this module checks if a connection to DNS port 53 originates from port 53, does not have the SYN bit set, and has the URG bit set, in which case it should be dropped. Note that DNS will automatically switch to TCP when a request is greater than 512 bytes.
[root@serverA ~]# iptables -A INPUT -p tcp --sport 53 --dport 53 --tcp-flags SYN,URG URG -j DROP
tcpmss
This matches a TCP packet with a specific Maximum Segment Size (MSS). The lowest legal limit for IP is 576, and the highest value is 1500. The goal in setting an MSS value for a connection is to avoid packet segmentation between two endpoints. Dial-up connections tend to use 576-byte MSS settings, whereas users coming from high-speed links tend to use 1500-byte values. The command-line option for this setting is
--mss value[:value]
where value is the MSS value to compare against. If a colon followed by a second value
is provided, an entire range is checked. For example,
[root@serverA ~]# iptables -I INPUT -p tcp -m tcpmss --mss 576 -j ACCEPT
[root@serverA ~]# iptables -I INPUT -p tcp -m tcpmss ! --mss 576 -j ACCEPT
This will provide a simple way of counting how many packets (and how many bytes)
are coming from connections that have a 576-byte MSS and how many are not. To see the
status of the counters, use iptables -L -v.
udp
Like the TCP module, the UDP module provides extra parameters to check for a
packet. Two additional parameters are provided:
▼ --source-port [!] port[:port] This option checks the source port of
a User Datagram Protocol (UDP) packet. If the port number is followed by a
colon and another number, the range between the two numbers is checked. If the
exclamation point is used, the logic is inverted.
▲ --destination-port [!] port[:port] Like the --source-port option, this checks the UDP destination port.
For example:
[root@serverA ~]# iptables -I INPUT -p udp --destination-port 53 -j ACCEPT
This example will accept all UDP packets destined for port 53. This rule is typically set to allow traffic to DNS servers.
------------------------------------------------------------------------------------------------------------------------------------
Examples:-
To list iptables rules:-
iptables -L
To replace rule number 3 in the INPUT chain with a rule that rejects all packets from the IP address 192.168.0.10
iptables -R INPUT 3 --source 192.168.0.10 --jump REJECT
Resetting iptables:-
sudo iptables --flush && sudo iptables --delete-chain
To delete all rules from the nat table:-
sudo iptables -t nat -F
To reject packets coming from the FTP port:-
sudo iptables --append FORWARD -p tcp --sport ftp --jump REJECT
To delete a rule:-
sudo iptables --delete FORWARD -i eth1 -o eth0 -j ACCEPT
To load the state extension and add a rule (appended here to the INPUT chain) that drops both invalid packets and packets for new connections:
sudo iptables -A INPUT --match state --state INVALID,NEW --jump DROP
To log packets that attempt to create a new connection through the FORWARD chain:
sudo iptables -A FORWARD -m state --state NEW -j LOG
To limit which local systems can be masqueraded out to the internet:
sudo iptables -t nat -A POSTROUTING -o eth0 -m iprange --src-range 192.168.0.0-192.168.0.32 -j MASQUERADE
iptables is being configured to allow the firewall to accept TCP packets for routing when they enter on interface eth0 from any IP address and are destined for an IP address of 192.168.1.58 that is reachable via interface eth1. The source port is in the range 1024 to 65535 and the destination port is port 80
iptables -A FORWARD -s 0/0 -i eth0 -d 192.168.1.58 -o eth1 -p TCP --sport 1024:65535 --dport 80 -j ACCEPT
To accept all packets except those from the IP address 192.168.0.45:
iptables -A INPUT ! -s 192.168.0.45 -j ACCEPT
The following example accepts messages coming in that are from (source) any host in the
192.168.0.0 network and that are going (destination) anywhere at all (the -d option is left
out or could be written as -d 0/0):
iptables -A INPUT -s 192.168.0.0/24 -j ACCEPT
To allow responses to connections we have initiated (for instance, when we use a web browser to visit a website, we want the replies from that site to be let back in):
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
Smurf attack: to prevent flooding the host with ping messages, allow only 1 ICMP echo request per second:
iptables -A INPUT -p icmp -m limit --limit 1/second -j ACCEPT
Anti-spoofing:- drop packets that claim to be from a loopback IP but arrive on a physical network interface:
iptables -A INPUT --in-interface ! lo --source 127.0.0.0/8 -j DROP
To allow LAN access to printers (CUPS on TCP port 631) through a user-defined services chain, so that internet access can be denied by a later rule:
iptables -A services -m iprange --src-range 192.168.1.1-192.168.1.254 -p tcp --dport 631 -j ACCEPT
To see what services we have running on our machine and what ports they are using
netstat --inet -pln
Ex:-
This rule will drop all packets from the 172.16.0.0/16 network.
iptables -t filter -A INPUT -s 172.16/16 -j DROP
Ex:-
To use ip6tables to drop all packets from the IPv6 network range 2001:DB8::/32,
we would use a rule like:
[root@serverA ~]# ip6tables -t filter -A INPUT -s 2001:DB8::/32 -j DROP
Ex:-
This rule will allow all packets going through the FORWARD chain that are destined for the 10.100.93.0/24 network.
iptables -t filter -A FORWARD -d 10.100.93.0/24 -j ACCEPT
To disable ping on the loopback interface:
iptables -A INPUT -s 127.0.0.1 -p icmp -j DROP
To delete the above rule:
iptables -D INPUT -s 127.0.0.1 -p icmp -j DROP
Deletion also works by specifying the rule number:
iptables -D INPUT 1
To let only a specific remote host (here example.com) reach the local web server at 64.41.64.124, drop web traffic from everyone else:
iptables -A INPUT ! -s example.com -d 64.41.64.124 -p tcp --dport 80 -j DROP
Blocking traffic from the 66.98.216.0/24 network to the 64.41.64.0/24 network at the gateway/firewall machine:
iptables -A INPUT -s 66.98.216.0/24 -d 64.41.64.0/24 -j DROP
Adding user defined chain
# create a chain to block new connections, except those established locally
iptables -N block
iptables -A block -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A block -m state --state NEW -i ! ppp0 -j ACCEPT
iptables -A block -j DROP # DROP everything else not accepted
# jump to that chain from INPUT and FORWARD chains
iptables -A INPUT -j block
iptables -A FORWARD -j block
Enabling the masquerade
modprobe iptable_nat # load the NAT kernel module
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
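Masquerading only does anything useful if the kernel is actually forwarding packets, so IP forwarding usually has to be switched on as well; a small sketch:
echo 1 > /proc/sys/net/ipv4/ip_forward # run as root; the sysctl equivalent is: sysctl -w net.ipv4.ip_forward=1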
.........................................................................................
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
sudo iptables -t nat -A PREROUTING -p tcp -d 66.187.232.50 -j DNAT --to-destination 192.168.0.10
sudo iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -s www.xyz.com -j DROP
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A OUTPUT -d 72.14.221.85 -j REJECT
iptables -A OUTPUT -d 72.14.209.86 -j REJECT
iptables -A OUTPUT -d 72.14.209.87 -j REJECT
iptables -A OUTPUT -d 64.233.161.85 -j REJECT
iptables -A OUTPUT -d 209.85.129.85 -j REJECT
iptables -A OUTPUT -d 209.85.141.85 -j REJECT
iptables -A OUTPUT -d 216.187.118.219 -j REJECT
iptables -A OUTPUT -d 209.85.143.189 -j REJECT
iptables -A OUTPUT -d 202.138.103.100 -j REJECT
Copying rules to and from the kernel
The iptables-save utility copies packet filtering rules from the kernel to standard output so you can save them in a file. The iptables-restore utility copies rules from standard input, as written by iptables-save, to the kernel.
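A typical round trip might look like this (the file path is only an example):
iptables-save > /etc/iptables.rules # run as root: dump the current ruleset to a file
iptables-restore < /etc/iptables.rules # load it back later, e.g. from a boot script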
Thursday, January 15, 2009
who is accessing this File
As long as a file or directory is accessed by a user or a process, that directory or file cannot be deleted, nor can the volume it is stored on be unmounted. So one question often asked by system administrators is, "Who is using this file or directory at this moment?" Most *NIX systems provide
the command fuser to answer this question. It allows you to identify the number (PID) of the process accessing the file or the directory at the moment. If the number of the locking process is known, then a command like ps reveals the user account running said process.
ex:-
fuser /home/user
/home/user: 6012c 6126c 6146c 6154c 6224c 6225c 6228c 6235c 6240c 6244c 6246c 6284c 6285c 6287c 6338c 7179c 8828c 12565c 12582c 14292c 14296c
user@ubuntu:~$ ps -ef | grep 6012 | grep -v grep
user 6012 5772 0 09:53 ? 00:00:01 x-session-manager
user 6121 6012 0 09:53 ? 00:00:00 /usr/bin/seahorse-agent --execute x-session-manager
user 6126 6012 0 09:53 ? 00:00:10 gnome-settings-daemon
user 6146 6012 0 09:53 ? 00:00:00 /bin/sh /usr/bin/compiz --sm-client-id default0
user 6148 6012 0 09:53 ? 00:00:47 gnome-panel --sm-client-id default1
user 6154 6012 0 09:53 ? 00:00:26 nautilus --no-default-window --sm-client-id default2
user 6225 6012 0 09:53 ? 00:00:00 bluetooth-applet --singleton
user 6228 6012 0 09:53 ? 00:00:09 update-notifier
user 6235 6012 0 09:53 ? 00:00:00 tracker-applet
user 6240 6012 0 09:53 ? 00:00:00 trackerd
user 6244 6012 0 09:53 ? 00:00:00 python /usr/share/system-config-printer/applet.py
user 6246 6012 0 09:53 ? 00:00:13 nm-applet --sm-disable
source:- Erik M Keller
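fuser has a couple of handy options worth knowing as well (the mount point below is only an example path):
fuser -v /home/user # verbose mode also shows the owning user, access type and command name
fuser -km /mnt/usbdisk # kill every process using the mount point, e.g. before unmounting (may need root; use with care)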
fold command
When you are writing documentation with, for example, vi, the lines tend to get really long. The length of a line while working is usually no problem, but as soon as the document is finished, those long lines sometimes get in the way and are hard to grasp.
Fortunately, other people have had to deal with that problem, and the result is the command fold. fold does one thing: it folds longer lines to a more manageable 80 characters (the default), or any other length using the -w parameter. To make sure fold does not break a word, the option -s is used to break a line only at the nearest whitespace character.
ex:- fold -s file_with_long_lines.txt
source :- Erik M Keller
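Combining both options, a quick sketch that wraps at 72 characters and writes the result to a new file (the file names are only examples):
fold -s -w 72 file_with_long_lines.txt > wrapped.txt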
setuid & setgid
The bits with octal values 4000 and 2000 are the setuid and setgid bits. When set on executable files, these bits allow programs to access files and processes that would otherwise be off limits to the user who runs them.
Using Symbols
To add both the User ID and Group ID permissions to a file, you use the s option. The
following example adds the User ID permission to the pppd program, which is owned by
the root user. When an ordinary user runs pppd, the root user retains ownership, allowing
the pppd program to change root-owned files.
# chmod +s /usr/sbin/pppd
The Set User ID and Set Group ID permissions show up as an s in the execute position of the owner and group segments. Set User ID and Group ID are essentially variations of the execute permission, x. Read, write, and User ID permissions are rws instead of rwx.
# ls -l /usr/sbin/pppd
-rwsr-sr-x 1 root root 184412 Jan 24 22:48 /usr/sbin/pppd
Using the Binary Method
For the ownership permissions, you add another octal number to the beginning of the octal digits. The octal digit for User ID permission is 4 (100) and for Group ID, it is 2 (010) (use 6 to set both—110). The following example sets the User ID permission to the pppd program,along with read and execute permissions for the owner, group, and others:
# chmod 4555 /usr/sbin/pppd
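Since such programs run with elevated rights, it can be useful to audit which executables on a system carry these bits; a small sketch:
find / -perm -4000 -type f 2>/dev/null # list setuid executables
find / -perm -2000 -type f 2>/dev/null # list setgid executables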
Wednesday, January 14, 2009
How to find the libraries a program requires ?
If you need to know which libraries a program requires in order to work, for example to copy that program to another system, or in case you receive an error message about missing libraries, you can use ldd to determine what is required.
Ex:-
ldd /sbin/fdisk
linux-gate.so.1 => (0xb7f7d000)
libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb7e1b000)
/lib/ld-linux.so.2 (0xb7f7e000)
ldd /bin/cat
linux-gate.so.1 => (0xb7f21000)
libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb7dbf000)
/lib/ld-linux.so.2 (0xb7f22000)
source Erik M Keller
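When a required library is missing, ldd marks it as "not found", so a quick check for a given binary (the path here is only a placeholder) might be:
ldd /usr/local/bin/someprog | grep 'not found'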
finding and deleting core files ?
sudo find / -name core -exec rm -i '{}' \;
Mind the use of '{}' and \; (there is a space before \;). '{}' is replaced by each file that was found, while \; terminates the -exec statement.
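Restricting the search to regular files avoids prompting on directories that merely happen to be named core; a variant of the same command:
sudo find / -type f -name core -exec rm -i '{}' \;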
syslogd and syslog.conf
The syslogd daemon manages all the logs on your system and coordinates with any of the logging operations of other systems on your network. Configuration information for syslogd is held in the /etc/syslog.conf file, which contains the names and locations for your system log files. Here you find entries for /var/log/messages and /var/log/maillog, among others. Whenever you make changes to the syslog.conf file, you need to restart the syslogd daemon.
NB:- source Richard Petersen
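How the restart is done depends on the distribution; two common ways (the init script name varies, and this assumes the classic sysklogd daemon rather than a replacement such as rsyslog):
sudo /etc/init.d/sysklogd restart # restart via the SysV init script
sudo kill -HUP $(cat /var/run/syslogd.pid) # or ask the running syslogd to reread syslog.conf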
Monday, January 12, 2009
Sunday, January 11, 2009
File Utility
The file utility performs a series of tests on each of the specified files in an attempt to classify it. With text files, the utility tries to determine the programming language by reading the first 512 bytes.
For executable files, the utility displays information about the platform, version and structure of the file's libraries.
EX:-
user@ubuntu:~$ file /bin/cat
/bin/cat: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.6.8, dynamically linked (uses shared libs), stripped
user@ubuntu:~$ file ./hello.rb
./hello.rb: a /usr/bin/ruby1.8 script text executable
system admin vs network admin
sulekha
what exactly is the difference b/w sys admin and a n/w admin ? in my experience they both are the same, isn't it ? , but i keep hearing from some people that they are different ? what exactly is the true story ?
norbert74
My understanding is that they have a different focus. A sysadmin works mainly on servers like installing software, configuration etc.
A network admin works mainly on routers, switches and so on.
If you have package loss you go to the network admin, if you need an upgrade on a web server you ask the sysadmin.
acid kewpie
Well generally the network admin looks after the network and the systems admin looks after the systems. Sorry if that sounds crude / rude, but the clue is in the title. Of course many many many businesses combine the roles formally or informally, but past a certain size of infrastructure the work on the servers, server operating systems and applications is seperated from the work on the routers, switches and firewalls. Maybe you'd be suprised how many sysadmins have no idea about TCP/IP past an ip address and default gateway.
acid kewpie
personally I go to Fedex when I have a package loss. Packet loss i might talk to the network guys. ;-)
sulekha
Originally Posted by acid_kewpie
Maybe you'd be suprised how many sysadmins have no idea about TCP/IP past an ip address and default gateway.
well I haven't seen a system admin like that , what i have seen is the combination sort of thing, the guys who do both stuff
acid kewpie
Well that's down to the way a company works. If you can afford people who can cover an entire infrastructure at a suitable level then that's the better option usually - until sheer vast scale makes it unworkable. I used to be a strictly assigned network admin in a team of about 5 people at the higher technical level. Currently I'm right across the board with Linux and Network as specialities in a flexible pool of about 40... horses for courses really.
Mark Shuttleworth on ubuntu linux
ABSTRACT An overview of Ubuntu Linux given by Mark Shuttleworth at the Ubuntu Linux Developers Summit. Credits: Speaker:Mark Shuttleworth, November 9, 2006
Wednesday, January 7, 2009
Saturday, January 3, 2009
In the Linux file system what does the name usr , sbin stands for ?
sulekha
In the Linux file system what does the name usr , sbin stands for ?
some say that
sbin means secure binary or system binary ?
usr means user or unix system resources ?
can any one give the correct explanation ?
NB: i know the purpose of usr and sbin directories, i just want to know the naming thing
BBI Nexus BBI
As i understand it:
USR = Unix System Resource
SBIN = Superuser Binaries
RealPSL
Re: unix system resources
Interesting question with an answer here http://www.itworld.com/nlsunix071101
sulekha
in the same lines what about mnt,opt,srv and sys ?
mcduck
/mnt is for mounting devices, either for temporary mounting of a single device directly to /mnt, or for mounting non-removable drives into their own directories under /mnt.
(/media is for removable drives)
/opt is for optional programs, typically you'd install things that are from outside of your distributions package sources into /opt.
/sys includes kernel, firmware and system related files.
I'm not sure about /srv, I believe it has something to do with running servers/services.
/usr is for all user-related files & programs. Everything normal (non-admin) users need goes here. For example you'll find binary files for most of your desktop programs, like browsers, music players etc. in /usr/bin and all system-wide installed themes in /usr/share/themes. Most of the documentation and help files for your programs are in /usr/share/doc.
You can read pretty good explanations of the directory structure here: http://www.pathname.com/fhs/pub/fhs-2.3.html
albinootje
http://www.linux.org/docs/ldp/howto/HighQuality-Apps-HOWTO/fhs.html
Richard petersen
The sysfs file system is a virtual file system that provides a hierarchical map of your kernel-
supported devices such as PCI devices, buses, and block devices, as well as supporting kernel
modules. The classes subdirectory will list all your supported devices by category, such as
network and sound devices. With sysfs your system can easily determine the device file with
which a particular device is associated. This is very helpful for managing removable devices
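A quick way to see this in practice (the interface name eth0 is only an example):
ls /sys/class # device categories: net, sound, block, and so on
cat /sys/class/net/eth0/address # the MAC address of the eth0 network device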