Concatenate Files and Number the Lines
cat -n file1 file2
When working with poems and source code, it's really nice to have numbered lines so that references are clear. If you want to generate line numbers when you use cat, add the -n option (or --number).
$ cat -n housman_-_rue.txt quarles_-_the_world.txt
1 WITH rue my heart is laden
2 For golden friends I had,
3 For many a rose-lipt maiden
4 And many a lightfoot lad.
5 By brooks too broad for leaping
6 The lightfoot boys are laid;
7 The rose-lipt girls are sleeping
8 In fields where roses fade.
9 The world's an Inn; and I her guest.
10 I eat; I drink; I take my rest.
11 My hostess, nature, does deny me
12 Nothing, wherewith she can supply me;
13 Where, having stayed a while, I pay
14 Her lavish bills, and go my way.
Line numbers can be incredibly useful, and cat provides a quick and dirty way to add them to a file.
Note
For a vastly better cat, check out dog (more information is available at http://opensource.weblogsinc.com/2005/02/17/why-dogs-are-betters-than-cats). Instead of local files, you can use dog to view the HTML source of web pages on stdout, or just a list of images or links on the specified web pages. The dog command converts all characters to lowercase or vice versa; converts line endings to Mac OS, DOS, or Unix; and even allows you to specify a range of lines to output (lines 5-25, for instance). Not to mention, the man page for dog is one of the funniest ever. This is one dog that knows a lot of new tricks!
source: Scott Granneman
Saturday, February 28, 2009
host command examples
To quickly find the IP address associated with a domain name, use the host command:
$ host www.granneman.com
www.granneman.com is an alias for granneman.com.
granneman.com has address 216.23.180.5
www.granneman.com is an alias for granneman.com.
www.granneman.com is an alias for granneman.com.
granneman.com mail is handled by 30 bhoth.pair.com.
$ host 65.214.39.152
152.39.214.65.in-addr.arpa domain name pointer web.bloglines.com.
tail command examples
View the Constantly Updated Last Lines of a File or Files
tail -f
tail -f --pid=PID terminates after the process with that PID dies.
The great thing about log files is that they constantly change as things happen on your system. The tail command shows you a snapshot of a file, and then deposits you back on the command line. Want to see the log file again? Then run tail again...and again...and again. Blech!
With the -f (or --follow) option, tail doesn't close. Instead, it shows you the last 10 lines of the file (or a different number if you add -n to the mix) as the file changes, giving you a way to watch all the changes to a log file as they happen. This is wonderfully useful if you're trying to figure out just what is happening to a system or program.
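For example, to follow the last 25 lines of a log, or to stop following automatically when a given process exits (the log path and PID here are just placeholders):
$ tail -f -n 25 /var/log/syslog
$ tail -f --pid=1234 /var/log/syslog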
For instance, a web server's logs might look like this:
Note
In order to save space, I've removed the IP address, date, and time of the access.
$ tail -f /var/log/httpd/d20srd_org_log_20051201
"GET /srd/skills/bluff.htm HTTP/1.1"...
"GET /srd/skills/senseMotive.htm HTTP/1.1"...
"GET /srd/skills/concentration.htm HTTP/1.1"...
"GET /srd/classes/monk.htm HTTP/1.1"...
"GET /srd/skills/escapeArtist.htm HTTP/1.1"...
spoofing MAC addresses
You can even change (or "spoof") the hardware MAC address for your network device. This is usually only necessary to get around some ISPs' attempts to link Internet service to a specific machine. Be careful with spoofing your MAC address because a mistake can conflict with other network devices, causing problems. If you do decide to spoof your MAC, make sure you use ifconfig by itself to first acquire the default MAC address so you can roll back to that later (by the way, the MAC address shown in this command is completely bogus, so don't try to use it).
# ifconfig eth0 hw ether 00:14:CC:00:1A:00
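Before changing anything, you can note the current hardware address (older ifconfig versions label it HWaddr; the exact output format varies):
# ifconfig eth0 | grep -i hwaddr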
lsof examples
List a User's Open Files
lsof -u
If you want to look at the files a particular user has open (and remember that those include network connections and devices, among many others), add the -u option to lsof, followed by the username (remember that lsof must be run as root).
Note
In order to save space, some of the data you'd normally see when you run lsof has been removed in this and further examples.
# lsof -u scott
List Users for a Particular File
lsof [file]
In the previous section, you saw what files a particular user had open. Let's reverse that, and see who's using a particular file. To do so, simply follow lsof with the path to a file on your system. For instance, let's take a look at who's using the SSH daemon, used to connect remotely to this computer (remember that lsof must be run as root).
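The invocation would look something like this (the path to the sshd binary is typical, but may differ on your distro):
# lsof /usr/sbin/sshd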
List Processes for a Particular Program
To find out the full universe of other files associated with a particular running program, follow lsof with the -c option and then the name of a running (and therefore "open") program.
lsof -c [program]
Ex:
# lsof -c sshd
COMMAND   PID   USER   NAME
sshd      10542 root   /lib/ld-2.3.5.so
sshd      10542 root   /dev/null
sshd      10542 root   192.168.0.170:ssh->192.168.0.100:4577 (ESTABLISHED)
sshd      10548 scott  /usr/sbin/sshd
sshd      10548 scott  192.168.0.170:ssh->192.168.0.100:4577 (ESTABLISHED)
source: Scott Granneman
Friday, February 27, 2009
Quickly Find Out What a Command Does Based on Its Name
man -f
If you know a command's name but don't know what it does, there's a quick and dirty way to find out without requiring you to actually open the man page for that command. Use the -f option (or --whatis), and the command's synopsis appears.
$ man -f ls
ls (1) - list directory contents
source: Scott Granneman
Rebuild man's Database of Commands
man -u
Occasionally you'll try to use man to find out information about a command and man reports that there is no page for that command. Before giving up, try again with the -u option (or --update), which forces man to rebuild the database of commands and man pages it uses. It's often a good first step if you think things aren't quite as they should be.
$ man ls
No manual entry for ls
$ man -u ls
LS(1) User Commands LS(1)
NAME
ls - list directory contents
SYNOPSIS
ls [OPTION]... [FILE]...
[Listing condensed due to length]
Sort Contents by File Extension
ls -X
The name of a file is not the only thing you can use for alphabetical sorting. You can also sort alphabetically by the file extension. In other words, you can tell ls to group all the files ending with .doc together, followed by files ending with .jpg, and finally finishing with files ending with .txt. Use the -X option (or --sort=extension); if you want to reverse the sort, add the -r option (or --reverse).
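For example, to sort the current directory by extension, and then again in reverse:
$ ls -X
$ ls -Xr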
Copy Files As Perfect Backups in Another Directory
cp -a
You might be thinking right now that cp would be useful for backing up files, and that is certainly true. With a few lines in a bash shell script, however, cp can be an effective way to back up various files and directories. The most useful option in this case would be the -a option (or --archive), which is also equivalent to combining several options: -dpR (or --no-dereference --preserve --recursive). Another way of thinking about it is that -a ensures that cp doesn't follow symbolic links (which could grossly balloon your copy), preserves key file attributes such as owner and timestamp, and recursively follows subdirectories.
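A minimal sketch of such a backup, assuming a hypothetical source directory and mounted backup drive:
$ cp -a ~/documents /media/backup/documents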
source: Scott Granneman
How to Visually Display a File's Type?
Soln: ls -F
Character   Meaning
*           Executable
/           Directory
@           Symbolic link
|           FIFO
=           Socket
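For example, a directory holding one of each type might list like this (the filenames are made up purely for illustration):
$ ls -F
backup/  notes.txt  report.sh*  todo@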
Monday, February 23, 2009
modinfo
The modinfo command displays information about one or more kernel modules.
Ex:
/sbin/modinfo floppy
filename: /lib/modules/2.6.24-21-generic/kernel/drivers/block/floppy.ko
alias: block-major-2-*
license: GPL
author: Alain L. Knaff
srcversion: 82A3853812A05EA35470909
depends:
vermagic: 2.6.24-21-generic SMP mod_unload 586
parm: floppy:charp
parm: FLOPPY_IRQ:int
parm: FLOPPY_DMA:int
df
df displays information about mounted filesystems. If you add the -T option, the filesystem type is included in the display.
df -T
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/sda1 ext3 76058116 11954216 60270796 17% /
varrun tmpfs 252996 120 252876 1% /var/run
varlock tmpfs 252996 0 252996 0% /var/lock
udev tmpfs 252996 64 252932 1% /dev
devshm tmpfs 252996 12 252984 1% /dev/shm
lrm tmpfs 252996 39780 213216 16% /lib/modules/2.6.24-21-generic/volatile
/dev/sdb5 ext3 242271360 30132684 199928880 14% /data
gvfs-fuse-daemon
fuse.gvfs-fuse-daemon 76058116 11954216 60270796 17% /home/zodiac/.gvfs
/dev/sdc1 vfat 1951200 8320 194288
To display inode usage:
df -i -x tmpfs
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda1 4792320 192604 4599716 5% /
/dev/sdb5 15269888 341 15269547 1% /data
gvfs-fuse-daemon 4792320 192604 4599716 5% /home/zodiac/.gvfs
/dev/sdc1 0 0 0 - /media/disk
If you aren't sure which filesystem a particular part of your directory tree lives on, you can give the df command a directory name or even a filename as a parameter.
Ex:
df -H ~zodiac/myfile.txt
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 78G 13G 62G 17% /
Watching the disk space
If you want to repeat a command many times, for example when you are monitoring something, don't forget about the watch command. It will print the results of the command to the screen every 2 seconds (you can change the interval with -n).
watch --no-title "df -h"
Disk utilization: du
-c produces a grand total at the end of the run.
-h human readable format
-s summarizes
-k prints size in kilobytes
ex: du -hc *
du -shc /usr/*
The du command displays information about the filename or filenames given as parameters. If a directory name is given, then du recurses and calculates sizes for every file and subdirectory of the given directory.
Ex: zodiac@ubuntu:/data$ du -bh | more
519M ./digitalsystems
537M ./electronics chitralekha
du: cannot read directory `./lost+found': Permission denied
16K ./lost+found
350M ./uboontoo
3.6G ./datacommunications
1012M ./ruby
3.2G ./datastructures
225M ./computerorganization
4.2G ./internettechnologies
431M ./ignou
74M ./Ajax
finding files by timestamp
Ex: To find all the files modified within the last 2 days. A day in this case is a 24-hour period relative to the current date and time. Note that you would use -atime if you wanted files based on access time rather than modification time.
find . -mtime -2 -type f -exec ls -l '{}' \;
Ex: Adding the -daystart option means that we want to consider days as calendar days, starting at midnight.
find . -daystart -mtime -2 -type f -exec ls -l '{}' \;
Ex: To find files modified between 1 hour and 10 hours ago:
find . -mmin -600 -mmin +60 -type f -exec ls -l '{}' \;
Listing Newest Files First
Use the `-t' option with ls to sort a directory listing so that the newest files are listed first.
• To list all of the files in the `/usr/tmp' directory sorted with newest first, type:
$ ls -t /usr/tmp RET
setting mtime with touch
The touch command can set a file's mtime to a specific date and time using either the -d or -t option. The -d option is very flexible in the date and time formats that it will accept.
touch -t 200511051510 file1
touch -d 11am file4
touch -d "last fortnight" file5
touch -d "yesterday 6am" file6
touch -d "2 days ago 12:00" file7
touch -d "tomorrow 02:00" file8
touch -d "5 Nov" file9
How to list hard disks found during boot-up?
dmesg | grep "[hs]d[a-z]"
lspci output for USB devices
zodiac@ubuntu:~$ lspci | grep -i usb
00:1d.0 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #1 (rev 01)
00:1d.1 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #2 (rev 01)
00:1d.2 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #3 (rev 01)
00:1d.3 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #4 (rev 01)
00:1d.7 USB Controller: Intel Corporation 82801G (ICH7 Family) USB2 EHCI Controller (rev 01)
using ld-linux.so to display library requirements
zodiac@ubuntu:~$ /lib/ld-linux.so.2 --list /bin/ln
linux-gate.so.1 => (0xb7eea000)
libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb7d8a000)
/lib/ld-linux.so.2 (0xb7eeb000)
Dynamic library configuration
How does the dynamic loader know where to look for shared libraries?
There are two configuration files: /etc/ld.so.conf and /etc/ld.so.cache.
cat /etc/ld.so.conf
include /etc/ld.so.conf.d/*.conf
ldconfig - configure dynamic linker run-time bindings
Loading of programs needs to be fast, so the ld.so.conf file is processed by the ldconfig command, which processes all the libraries from ld.so.conf as well as those in the trusted directories /lib and /usr/lib. The dynamic loader uses the ld.so.cache file to locate files that are to be dynamically loaded and linked. Normally we use the ldconfig command without parameters to rebuild ld.so.cache.
ldconfig -p | more
775 libs found in cache `/etc/ld.so.cache'
libzephyr.so.3 (libc6) => /usr/lib/libzephyr.so.3
libz.so.1 (libc6) => /usr/lib/libz.so.1
libz.so (libc6) => /usr/lib/libz.so
libx264.so.57 (libc6) => /usr/lib/libx264.so.57
libx86.so.1 (libc6) => /lib/libx86.so.1
libx11globalcomm.so.1 (libc6) => /usr/lib/libx11globalcomm.so.1
Saturday, February 21, 2009
run as a lesser user in order to help security
Some applications that are started by the root user give up their permissions and run as a lesser user in order to help security.
Ex:
The Apache web server must be started by the root user in order to listen on port 80 (only root can bind to ports lower than 1024), but it then gives up its root permissions and starts all of its threads as a lesser user (typically the user "nobody", "apache", or "www").
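One quick way to observe this yourself, assuming Apache is running (the process may be named httpd or apache2 depending on the distro):
$ ps -eo pid,user,comm | grep -iE 'httpd|apache'
The parent process should be owned by root, with the worker processes owned by the lesser user.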
Can anyone give more examples of this?
Tim Greer
Only ports below 1024 need to bind as the root user originally. You can bind to higher ports as non privileged users as the parent, but only a privileged user can drop and regain higher privileged uid/gids. Other examples are in many services; Apache, FTP, Named, etc.
maxwell lol
There are many programs that have SETUID to root permission, and drop permissions after authenticating someone, establishing a connection, etc.
X
sudo
sendmail
These commands will list some setuid programs
find /usr -perm -4000 -type f | xargs ls -l
unruh
I think he means that a program is started by root, and then drops root priv. For example programs started up on boot are usually started up by root (/etc/rc.local, /etc/rc?.d, ...) and many then drop root. For example, httpd is NOT suid root. It is run by root, attaches, and then
drops priv to apache, or whatever the user is.
Tim Greer
Also, for the OP's sake, they should be aware that a program doesn't need to be suid to run as root and drop privs, or regain privs after it's dropped to a non prived user.
The natural philosopher
I've never been able to run with root privs without being started by a root process or having SUID root. So I would be interested to know how you achieve that.
Tim Greer
I said (for the OP's sake) that they don't have to be suid, as they might have been confused by the reply, since you only mentioned suid. For their benefit, I stated the programs didn't need to be suid. I'm sure you knew that, so I said it isn't always the case or needed. It does need to run as root somehow, of course, to drop and regain root privs or change uid/gid.
florian diesch
init is neither SUID nor started by a root process
Friday, February 20, 2009
sysctl
It is used to configure kernel parameters and tune some system resources. Run sysctl -a to see what variables can be controlled by sysctl and what they are set to. The sysctl utility is most useful for tuning network parameters as well as some kernel parameters.
Use the file /etc/sysctl.conf to set sysctl parameters at boot time.
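A minimal sketch of what an entry in that file looks like (the parameter shown is just an example):
# /etc/sysctl.conf
net.ipv4.ip_forward = 0
After editing the file, running sysctl -p reloads it without a reboot.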
Tuesday, February 17, 2009
Using Chroot
The chroot() system call (pronounced “cha-root”) allows a process and all of its child processes to redefine what they perceive the root directory to be. For example, if you were to chroot("/www") and start a shell, you would find that using the cd command would leave you at /www. The program would believe /www is the root directory, but in reality, it would not be. This restriction applies to all aspects of the process’s behavior: where it loads configuration files, shared libraries, data files, etc.
NOTE Once executed, the change in root directory by chroot is irrevocable through the lifetime of the process.
By changing the perceived root directory of the system, a process has a restricted view of what is on the system. Access to other directories, libraries, and configuration files is not available. Because of this restriction, it is necessary for an application to have all of the files necessary for it to work completely contained within the chroot environment. This includes any passwd files, libraries, binaries, and data files.
CAUTION A chroot environment will protect against accessing files outside of the directory,
but it does not protect against system utilization, memory access, kernel access, and interprocess
communication. This means that if there is a security vulnerability that can be taken advantage of by sending signals to another process, it will be possible to exploit it from within a chroot environment.In other words, chroot is not a perfect cure, but rather more of a deterrent.
Every application needs its own set of files and executables, and thus, the directions for making an application work in a chroot environment vary. However, the principle remains the same: Make it all self-contained under a single directory with a faux root directory structure.
An Example Chroot Environment
As an example, let’s create a chroot environment for the BASH shell. We begin by creating the directory we want to put everything into. Since this is just an example, we’ll create a directory in /tmp called myroot.
[root@serverA ~]# mkdir /tmp/myroot
[root@serverA ~]# cd /tmp/myroot
Let’s assume we need only two programs: bash and ls. Let’s create the bin directory under myroot and copy the binaries over there.
[root@serverA myroot]# mkdir bin
[root@serverA myroot]# cp /bin/bash bin/
[root@serverA myroot]# cp /bin/ls bin/
With the binaries there, we now need to check whether these binaries need any libraries. We use the ldd command to determine what (if any) libraries are used by these two programs. We run ldd against /bin/bash, like so:
[root@serverA myroot]# ldd /bin/bash
linux-gate.so.1 => (0x00110000)
libtinfo.so.5 => /lib/libtinfo.so.5 (0x031f3000)
libdl.so.2 => /lib/libdl.so.2 (0x00c1c000)
libc.so.6 => /lib/libc.so.6 (0x00a96000)
/lib/ld-linux.so.2 (0x00a77000)
We also run ldd against /bin/ls, like so:
[root@serverA myroot]# ldd /bin/ls
linux-gate.so.1 => (0x00110000)
librt.so.1 => /lib/librt.so.1 (0x0043b000)
libselinux.so.1 => /lib/libselinux.so.1 (0x0041e000)
libacl.so.1 => /lib/libacl.so.1 (0x00a47000)
libc.so.6 => /lib/libc.so.6 (0x00a96000)
libpthread.so.0 => /lib/libpthread.so.0 (0x00c23000)
/lib/ld-linux.so.2 (0x00a77000)
libdl.so.2 => /lib/libdl.so.2 (0x00c1c000)
libattr.so.1 => /lib/libattr.so.1 (0x00a40000)
Now that we know what libraries need to be in place, we create the lib directory and copy the libraries over.First we create the /tmp/myroot/lib directory:
[root@serverA myroot]# mkdir /tmp/myroot/lib
For shared libraries that /bin/bash needs, we run
[root@serverA myroot]# cp /lib/libtinfo.so.5 lib/
[root@serverA myroot]# cp /lib/libdl.so.2 lib/
[root@serverA myroot]# cp /lib/libc.so.6 lib/
[root@serverA myroot]# cp /lib/ld-linux.so.2 lib/
And for /bin/ls, we need
[root@serverA myroot]# cp /lib/librt.so.1 lib/
[root@serverA myroot]# cp /lib/libselinux.so.1 lib/
[root@serverA myroot]# cp /lib/libacl.so.1 lib/
[root@serverA myroot]# cp /lib/libpthread.so.0 lib/
[root@serverA myroot]# cp /lib/libattr.so.1 lib/
Most Linux distros include a little program called chroot that invokes the chroot() system call for us, so we don’t need to write our own C program to do it. The program takes two parameters: the directory that you want to make the root directory and the command that you want to run in the chroot environment. We want to use /tmp/myroot as the directory and start /bin/bash, thus we run:
[root@serverA myroot]# chroot /tmp/myroot /bin/bash
Because there is no /etc/profile or /etc/bashrc to change our prompt, the prompt will change to
bash-3.00#. Now try an ls:
bash-3.00# ls
bin lib
Then try a pwd to view the current working directory:
bash-3.00# pwd
/
NOTE We didn’t need to explicitly copy over the pwd command used previously, because pwd is one of the many BASH built-in commands. It comes with the BASH program that we already copied over. Since we don’t have an /etc/passwd or /etc/group file in the chrooted environment
(to help map numeric user IDs to usernames), an ls -l command will show the raw user ID (UID) values for each file. For example:
bash-3.2# cd lib/
bash-3.2# ls -l
-rwxr-xr-x 1 0 0 128952 Feb 10 18:09 ld-linux.so.2
-rwxr-xr-x 1 0 0 26156 Feb 10 18:14 libacl.so.1
....
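Copying each library by hand gets tedious for larger binaries. A rough way to automate it, assuming a GNU userland and that you run it from /tmp/myroot, is to scrape the paths out of ldd's output:
[root@serverA myroot]# for lib in $(ldd /bin/bash | grep -o '/[^ ]*' | sort -u); do cp "$lib" lib/; done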
xinetd
xinetd—the name is an acronym for “extended Internet services daemon.” The xinetd program accomplishes the same task as the regular inetd program: It helps to start programs that provide Internet services. Instead of having such programs automatically start up during system initialization and remain unused until a connection request arrives, xinetd instead stands in the gap for those programs and listens on their normal service ports. As a result, when xinetd hears a service request meant for one of the services it manages, it then spawns the appropriate service.
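As a rough illustration, each managed service gets a stanza in /etc/xinetd.conf or under /etc/xinetd.d/ that looks something like this (the service name and values are examples only, not a recommended configuration):
service daytime
{
    disable     = yes
    socket_type = stream
    protocol    = tcp
    user        = root
    wait        = no
}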
NOTE Your Linux distribution might not have the xinetd software installed out of the box. The xinetd package can be installed with yum on a Fedora distro (or RHEL, CentOS) by running
yum install xinetd
On a Debian-based distro like Ubuntu, xinetd can be installed using APT by running
sudo apt-get install xinetd
source: Steve Shah & Wale Soyinka
SYN Flood Protection
When TCP initiates a connection, the first thing it does is send a special packet to the destination, with the flag set to indicate the start of a connection. This flag is known as the
SYN flag. The destination host responds by sending an acknowledgment packet back to the source, called (appropriately) a SYNACK. Then the destination waits for the source to return an acknowledgment, showing that both sides have agreed on the parameters of their transaction. Once these three packets are sent (this process is called the “three-way handshake”), the source and destination hosts can transmit data back and forth.
Because it’s possible for multiple hosts to simultaneously contact a single host, it’s important that the destination host keep track of all the SYN packets it gets. SYN entries are stored in a table until the three-way handshake is complete. Once this is done, the connection leaves the SYN tracking table and moves to another table that tracks established connections.
A SYN flood occurs when a source host sends a large number of SYN packets to a destination with no intention of responding to the SYNACK. This results in overflow of the destination host’s tables, thereby making the operating system unstable. Obviously, this is not a good thing.
Linux can prevent SYN floods by using a syncookie, a special mechanism in the kernel that tracks the rate at which SYN packets arrive. If the syncookie detects the rate going above a certain threshold, it begins to aggressively get rid of entries in the SYN table that don’t move to the “established” state within a reasonable interval. A second layer of protection is in the table itself: If the table receives a SYN request that would cause the table to overflow, the request is ignored. This means it may happen that a client will be temporarily unable to connect to the server—but it also keeps the server from crashing altogether and kicking everyone off!
First use the sysctl tool to display the current value for the tcp_syncookie setting. Type
[root@serverA ~]# sysctl net.ipv4.tcp_syncookies
net.ipv4.tcp_syncookies = 0
The output shows that this setting is currently disabled (value=0). To turn on tcp_syncookie support, enter this command:
[root@serverA ~]# sysctl -w net.ipv4.tcp_syncookies=1
net.ipv4.tcp_syncookies = 1
Because /proc entries do not survive system reboots, you should add the following line to the end of your /etc/sysctl.conf configuration file. To do this using the echo command, type:
echo "net.ipv4.tcp_syncookies = 1" >> /etc/sysctl.conf
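To apply the setting from the file immediately, without waiting for a reboot, you can then run:
[root@serverA ~]# sysctl -p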
source: Steve Shah & Wale Soyinka
where to find a listing of all the network drivers installed for your kernel
You can find a listing of all the network device drivers that are installed for your kernel in the
/lib/modules/`uname -r`/kernel/drivers/net directory,
like so:
[root@serverA etc]# cd /lib/modules/`uname -r`/kernel/drivers/net
[root@serverA net]# ls
If you want to see a driver’s description without having to load the driver itself, use the modinfo command. For example, to see the description of the yellowfin.ko driver, type
[root@serverA net]# modinfo yellowfin | grep -i description
Or you can do: strings acenic.ko | grep description
description=AceNIC/3C985/GA620 Gigabit Ethernet driver
Monday, February 16, 2009
How to alias network protocol family 10 to off in Ubuntu Hardy
Allen Kistler
It works because it prevents the ipv6 kernel module from loading, hence disabling any support for IPv6.
Personally, I think a better way, if that's what you want to do, is put "install ipv6 /bin/true" in /etc/modprobe.conf. Don't screw with the
aliases.
What not loading ipv6 does is keep your machine from requesting IPv6 DNS records (AAAA) from your DNS server. Some DNS servers ignore requests for IPv6 addresses, which means the client has to time out the request before it asks for an IPv4 address (A). (If it retries, then it has to
time out all the retries, which could be half a minute or so.) Most DNS servers are well-behaved enough now that they'll return an invalid request result or an unknown result if they don't support IPv6 requests, so the client can make an IPv4 request right away. In other words, for
most people this tip isn't going to improve anything.
FWIW, Google did some initial testing a year or so ago that found a different, but related, problem. If the client is IPv6-enabled, but the Internet connection is IPv4-only, requesting *and getting* an IPv6 address for a site makes the site completely inaccessible. That's why
they didn't assign an IPv6 address to www.google.com, but instead created a different name (ipv6.google.com) that probably points to the same pool of servers.
Pascal Hambourg
Why not just enable IPv6 on Google websites?
We continuously conduct detailed measurements on the quality of IPv6
connectivity, and our latest results show that making Google services
generally available over IPv6 at this time would lead to connection
problems and increased latency for a small number of users. User
experience is very important to us, and we do not want to impact users
on networks that do not yet fully support IPv6. We will continue to
re-evaluate the situation as the IPv6 Internet evolves.
======================================================================
There are lots of IPv6-enabled hosts without global IPv6 connectivity out there. This includes many hosts running GNU/Linux or Windows Vista. If what you wrote were right, *all* these hosts could not reach dual-stack sites, which is fortunately not the case.
If the client is IPv6-enabled but the host it runs on has no global IPv6 connectivity (i.e. no default IPv6 route or no global address), then non-local IPv6 communications are rejected with a "network unreachable" error, and the client tries again using IPv4.
Only hosts with a broken IPv6 setup, e.g. a default IPv6 route but no good IPv6 connectivity, will experience trouble. This is the "small number of users" Google talks about.
Things are changing.
Environment variables
Every instance of a shell, and every process that is running, has its own "environment" settings that give it a particular look, feel, and, in some cases, behavior. These settings are typically controlled by environment variables. Some environment variables have special meanings to the shell, but there is nothing stopping you from defining your own and using them for your own needs. It is through the use of environment variables that most shell scripts are able to do interesting things and remember results from user inputs as well as program outputs.
Printing Environment Variables
To list all of your environment variables, use the printenv command. For example,
[yyang@fedora-serverA ~]$ printenv
HOSTNAME=fedora-serverA.example.org
SHELL=/bin/bash
TERM=xterm
HISTSIZE=1000
...
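Defining and exporting your own variable is just as easy; a trivial sketch (the variable name is arbitrary):
[yyang@fedora-serverA ~]$ export MY_BACKUP_DIR="$HOME/backups"
[yyang@fedora-serverA ~]$ printenv MY_BACKUP_DIR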
what exactly is the purpose of user "nobody" with user ID 65534 in Ubuntu Linux?
aijarot
Does anyone know what this means? "successful su for nobody by root" "+ ??? root:nobody" "(pam_unix) session opened for user nobody by (uid=0)"
kav
I found a lot of these in my /var/log/auth.log
Dec 18 06:25:03 localhost su[3224]: Successful su for nobody by root
Dec 18 06:25:03 localhost su[3224]: + ??? root:nobody
Dec 18 06:25:03 localhost su[3224]: (pam_unix) session opened for user nobody by (uid=0)
Dec 18 06:25:03 localhost su[3224]: (pam_unix) session closed for user nobody
What does a su for nobody by root mean?
I mean I have plenty of successful su for root by (user), but what on earth is su for nobody by root?
I found this 'nobody' in my /etc/passwd file too. Is it used by a program or has my box been compromised like a chump?
redazz
nobody is a system user that is used to run services e.g. apache and samba on Linux distros. Root has to start the service and then pass on control to the user "nobody".
int0x80
As a precautionary measure, I set the shell to /dev/null
Code:
int0x80:~$ grep nobody /etc/passwd
nobody:x:65534:65534:nobody:/nonexistent:/dev/null
Don't forget to add /dev/null as a shell
Code:
echo "/dev/null" >> /etc/shells
redazz
Most distros set the shell for nobody to /bin/false which is similar to your suggestion.
int0x80
It should also be noted that there is a difference between having the shell as /bin/false or /bin/nologin and having the shell as /dev/null. For example, set each of those as the shell for a test user, then attempt to login through SSH on each one. With a shell of /dev/null, an attacker could not be certain whether the attempted user exists on the system -- not the case where /bin/false or /bin/nologin is the shell.
kay
Yes, /dev/null seems to be just a little bit better just for that reason.
techemically
I get this when trying to run this command: desktop:~$ grep nobody /etc/passwd
nobody:x:65534:65534:nobody:/nonexistent:/bin/sh
this "nobody" just popped up one day under my normal profile name and i cannot set it to /dev/null. I get permission denied.
jomen
In light of these option-hints I think the command would have to be:
chsh -s /dev/null nobody or chsh --shell /dev/null nobody
see:
http://www.debian.org/doc/manuals/system-administrator/ch-sysadmin-users.html
http://www.debianhelp.co.uk/usersid.htm
Sunday, February 15, 2009
Backing Up the MBR
It is easy to do this using the dd command. Since the MBR of a PC’s hard disk resides in the first 512 bytes of the disk, you can easily copy the first 512 bytes to a file (or to a floppy disk) by typing
[root@fedora-serverA ~]# dd if=/dev/sda of=/tmp/COPY_OF_MBR bs=512 count=1
1+0 records in
1+0 records out
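Restoring is the same command with if= and of= swapped. Be very careful: writing to the wrong device will destroy its partition table (the device name /dev/sda is assumed here):
[root@fedora-serverA ~]# dd if=/tmp/COPY_OF_MBR of=/dev/sda bs=512 count=1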
Understanding Bash fork() bomb ~ :(){ :|:& };:
Posted By Vivek Gite On November 26, 2007 @ 6:11 am In BASH Shell, FreeBSD, Linux, Security, UNIX | 14 Comments
Q. Can you explain following bash code or bash fork() bomb?
:(){ :|:& };:
A. This is a bash function that gets called recursively (a recursive function), and it is some of the most horrible code for any Unix / Linux box. It is often used by sysadmins to test user process limits (Linux process limits can be configured via /etc/security/limits.conf and PAM).
Once a successful fork bomb has been activated in a system, it may not be possible to resume normal operation without rebooting, as the only solution to a fork bomb is to destroy all instances of it.
WARNING! These examples may crash your computer if executed.
Understanding :(){ :|:& };: fork() bomb code
:() - This is the function name. It accepts no arguments at all. Generally, a bash function is defined as follows:
foo(){
arg1=$1
echo ''
#do_something with the $arg1 argument
}
fork() bomb is defined as follows:
:(){
:|:&
};:
:|: - Next, the function calls itself using the programming technique called recursion, and pipes the output to another call of the function ':'. The worst part is that the function gets called twice to bomb your system.
& - Puts the function calls in the background, so the children can never be killed off and keep eating system resources.
; - Terminates the function definition.
: - Calls (runs) the function, i.e., sets off the fork() bomb.
Here is more human-readable code:
bomb() {
bomb | bomb &
}; bomb
A properly configured Linux / UNIX box should not go down when a fork() bomb goes off.
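Before experimenting, you can check (and temporarily lower) the per-user process limit for your current shell with ulimit; a sketch of a cautious session, though the examples below are still dangerous:
$ ulimit -u        # show the max user processes allowed in this shell
$ ulimit -u 200    # lower the limit for this session only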
l33t
Perl example: perl -e "fork while fork" &
Python example:
import os
while(1):
    os.fork()
Windows XP / Vista bat file example:
:bomb
start %0
goto bomb
UNIX style for Windows:
%0|%0
C program example:
#include <unistd.h>
int main() { while(1) fork(); }
/etc/security/limits.conf file
How to: Prevent a fork bomb by limiting user process
Posted By Vivek Gite On November 27, 2007 @ 5:28 pm In CentOS, Debian Linux, Howto, Linux, RedHat/Fedora Linux, Security | 13 Comments
Earlier, I wrote about the fork bomb [2]; a few readers would like to know about getting protection against such attacks:
How do I protect my system from a fork bomb under Linux?
Limiting user processes is important for running a stable system. To limit user processes, just add a user name, a group, or all users to the /etc/security/limits.conf file and impose process limits.
Understanding /etc/security/limits.conf file
Each line describes a limit for a user in the form:
<domain> <type> <item> <value>
Where:
* <domain> can be:
o a user name
o a group name, with @group syntax
o the wildcard *, for default entry
o the wildcard %, can be also used with %group syntax, for maxlogin limit
* <type> can have the two values:
o "soft" for enforcing the soft limits
o "hard" for enforcing hard limits
* <item> can be one of the following:
o core - limits the core file size (KB)
o data - max data size (KB)
o fsize - maximum filesize (KB)
o memlock - max locked-in-memory address space (KB)
o nofile - max number of open files
o rss - max resident set size (KB)
o stack - max stack size (KB)
o cpu - max CPU time (MIN)
o nproc - max number of processes
o as - address space limit
o maxlogins - max number of logins for this user
o maxsyslogins - max number of logins on the system
o priority - the priority to run user process with
o locks - max number of file locks the user can hold
o sigpending - max number of pending signals
o msgqueue - max memory used by POSIX message queues (bytes)
o nice - max nice priority allowed to raise to
o rtprio - max realtime priority
o chroot - change root to directory (Debian-specific)
Log in as root and open the configuration file:
# vi /etc/security/limits.conf
The following will prevent a "fork bomb":
vivek hard nproc 300
@student hard nproc 50
@faculty soft nproc 100
@pusers hard nproc 200
The above will prevent anyone in the student group from having more than 50 processes; the faculty and pusers group limits are set to 100 and 200, respectively. The user vivek can create only 300 processes. Please note that KDE and GNOME desktop sessions can launch many processes.
Save and close the file. Test your new system by dropping a fork bomb:
$ :(){ :|:& };:
Article printed from nixCraft: http://www.cyberciti.biz/tips
URL to article: http://www.cyberciti.biz/tips/linux-limiting-user-process.html
Rubick
Could you tell me what the difference is between hard and soft limits?
People told me soft is like a warning and hard is the real max limit, but I’m not sure.
vivek
Yup, you are correct about soft and hard limits. For example, the following will prevent anyone in the student group from having more than 50 processes, and a warning will be given at 30 processes.
@student soft nproc 30
@student hard nproc 50
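A member of student could then raise their own soft limit up to, but never beyond, the hard limit; a sketch of what such a session might look like:
$ ulimit -Su       # current soft limit on processes
30
$ ulimit -Su 50    # allowed: up to the hard limit
$ ulimit -Su 60    # refused: exceeds the hard limit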
Sergei Vasilyev
I wonder how to limit the number of CPU cores used per user, or per user process, when a process is multithreaded and the server has multiple CPUs.
Joshi
I think this can be done via: apt-get install cpulimit
Robert Delahunt
I don’t see any info for doing it without PAM, so here’s some info (for us Slackware people, etc, and others not using PAM):
Put this in /etc/profile.conf:
ulimit -u 100
where this is the limit of processes anyone can run. Be warned that it could cause problems if you don’t know how many processes you typically run, so play with ps aux | wc -l and other tools to check how many you would need. Cheers
Become Another User, with His Environment Variables
su -l
The su command only works if you know the password of the user. No password, no transformation. If it does work, you switch to the shell that the user has specified in the /etc/passwd file: sh, tcsh, or bash, for instance. Most Linux users just use the default bash shell, so you probably won't see any differences there. Notice also in the previous example that you didn't change directories when you changed users. In essence, you've become gromit, but you're still using scott's environment variables. It's as if you found Superman's suit and put it on. You might look like Superman (yeah, right!), but you wouldn't have any of his powers.
The way to fix that is to use the -l option (or --login).
$ pwd
/home/scott/libby
$ whoami
scott
$ su -l gromit
Password:
$ whoami
gromit
$ pwd
/home/gromit
Things look mostly the same as the "Become Another User" example, but things are very different behind the scenes. The fact that you're now in gromit's home directory should demonstrate that something has changed. The -l option tells su to use a login shell, as though gromit actually logged in to the machine. You're now gromit in name, you're using gromit's environment variables, and you're in gromit's home directory (where gromit would find himself when he first logged in to this machine). It's as though putting on Superman's skivvies also gave you the ability to actually leap tall buildings in a single bound!
source: scott granneman
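If you just want to see the difference without switching shells interactively, the -c option runs a single command as the other user (this sketch assumes the gromit account above; you'll be prompted for gromit's password each time):
$ su gromit -c pwd
/home/scott/libby      # plain su leaves you in your current directory
$ su -l gromit -c pwd
/home/gromit           # a login shell starts in gromit's home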
Become root, with Its Environment Variables
su -
Entering su all by its lonesome is equivalent to typing su root: you're root in name and power, but that's all. Behind the scenes, your non-root environment variables are still in place, as shown here:
$ pwd
/home/scott/libby
$ whoami
scott
$ su
Password:
$ whoami
root
$ pwd
/home/scott/libby
When you use su -, you not only become root, you also use root's environment variables.
$ pwd
/home/scott/libby
$ whoami
scott
$ su -
Password:
$ whoami
root
$ pwd
/root
Now that's better! Appending - after su is the same as su -l root, but requires less typing. You're root in name, power, and environment, which means you're fully root. To the computer, anything that root can do, you can do. Have fun with your superpowers, but remember that with great power comes great... aw, you know how it ends.
source: scott granneman
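The same idea works for one-off commands: adding -c hands a single command to root's login shell, so it runs with root's PATH and environment (a sketch; the exact output varies by distribution):
$ su - -c 'echo $PATH'
Password:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin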
Accidentally deleted Gnome Panel
See:
http://ubuntuforums.org/showthread.php?p=6019718
http://ubuntuforums.org/showthread.php?t=135990
Friday, February 13, 2009
shred command
.............
syntax:- shred [OPTIONS] FILE [...]
Ex:-
$ echo "Sensitive data" > file
$ cat file
Sensitive data
$ shred file
$ ls file
file
$ cat file
Ò´¿(j}yãÒÒÁXp|ÄþÅJ]vìâ£íÕ!¸`ÓçÚá/é²c\§øn
cí%±0zÖTt¯É
¤~Q£_,§Àý?ÎO|Ù{>A0æä~Ë«Á@¾p^ÈÅáÜyÌ¡èÂ$®5Í^8fµ
4ÒWc@!-5üÁ%¨çN!"R
Îo8{³FI¸* \¨ç´
àÀTÛ^
WÑ8ÇkÇRá3¯çz\[ÔhB®ÙºÉ%lk @°pÅ%F ¾áDcmÃïÿfG]5Ýiû²
$ rm file
* shred a hard disk
shred /dev/sdx
illtbagu
Usually I just shred -v -f -u -z if it's a file, but if it's a folder I can't do this; it doesn't let me shred a folder from the command prompt. How can I shred a folder from the command prompt?
[aday@schrock321 AD]$ shred -v -f -u -z '/home/AD/Desktop/Trash/joe2'
shred: /home/AD/Desktop/Trash/joe2: Is a directory
[aday@schrock321 AD]$
jailbait
Shredding overwrites a file's contents and then deletes the file. Since a folder isn't a regular file with contents to overwrite, shredding a folder is meaningless. Just delete the folder.
illtbagu
Ahhh, misunderstood. I have a folder with files in it. I don't care about the folder, and I don't want to cd into every subfolder just so I can shred everything in it. Yes, I know I could just use the shred in my file manager (which, by the way, shreds the entire contents of the folder), but I would like to do this from the console. That's what Linux is about, right? Power from both the command line and the GUI. But the GUI in this case lets me shred a folder's entire contents while the console does not.
Looking back at my post, I said this completely wrong.
jailbait
I tried several experiments and you are right about the way shred works. It seems to insist on working on only one file per command. I did come up with the following command; some variation of it will probably do what you want:
find /home/AD/Desktop/Trash/joe2 -iname "*" -exec shred {} -v -f -u -z \;
illtbagu
Hey jailbait, thanks, that worked great. I have got to do some studying on scripting. I just thought it was kind of silly that shred didn't have the option, though. gzip allows you to do this:
-r --recursive operate recursively on directories
I was also surprised that I couldn't find where someone had asked this question before.
sir woofy
I just added ;rm -rf /home/AD/Desktop/Trash/joe2/ to the end of it to delete all the folders, so:
find /home/AD/Desktop/Trash/joe2 -iname "*" -exec shred {} -v -f -u -z \;;rm -rf /home/AD/Desktop/Trash/joe2/
But am I right in saying that the folder names could still be recoverable?
jailbait
"am I right in saying that the folder names could still be recoverable?"
Yes, you might use the mv command to rename the directories to something meaningless before you delete the directories.
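Putting the thread together, a slightly tidier variant restricts find to regular files (so directories no longer trigger errors) and then removes the emptied tree; a sketch, reusing the path from the thread:
find /home/AD/Desktop/Trash/joe2 -type f -exec shred -v -f -u -z {} \; && rm -rf /home/AD/Desktop/Trash/joe2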
logwatch utility
see a report of all sudo attempts
logwatch --print --service sudo --range all
################### Logwatch 7.3.6 (05/19/07) ####################
Processing Initiated: Fri Feb 13 14:34:36 2009
Date Range Processed: all
Detail Level of Output: 5
Type of Output: unformatted
Logfiles for Host: ubuntu
##################################################################
--------------------- Sudo (secure-log) Begin ------------------------
==============================================================================
root => root
------------
/bin/sh - 31 Times.
root => user
------------
/usr/bin/gconftool - 66 Times.
==============================================================================
user => root
------------
/bin/bash - 1 Times.
/bin/cat - 1 Times.
/bin/chmod - 2 Times.
/bin/sh - 1 Times.
/etc/init.d/apache2 - 4 Times.
/etc/init.d/networking - 1 Times.
/sbin/fdisk - 1 Times.
/sbin/init - 1 Times.
/usr/bin/apt-get - 16 Times.
/usr/bin/at - 11 Times.
/usr/bin/find - 12 Times.
/usr/bin/gedit - 10 Times.
/usr/bin/ldd - 2 Times.
/usr/bin/lsb_release - 1 Times.
/usr/bin/myisamchk - 4 Times.
/usr/bin/nautilus - 7 Times.
/usr/bin/passwd - 1 Times.
/usr/sbin/ethtool - 4 Times.
/usr/sbin/synaptic - 7 Times.
/usr/sbin/tcpdump - 3 Times.
To see only yesterday's entries:-
logwatch --print | less
To see all the useful data logwatch can display:-
logwatch --range all --archives --detail High --print|less
see:-
https://help.ubuntu.com/community/Logwatch
http://www.logwatch.org
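To get such a report automatically, logwatch can mail its output; a nightly crontab entry along these lines is a common setup (a sketch; the path and address are placeholders for your system):
0 6 * * * /usr/sbin/logwatch --service sudo --range yesterday --mailto admin@example.com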
Thursday, February 12, 2009
command to know current net speed
Whenever I download files using wget, it will show the download speed, 60kbps etc.
Now what is the command to get the current net speed without downloading any file?
I tried the following:
sudo ethtool -S eth0
[sudo] password for user:
NIC statistics:
early_rx: 0
tx_buf_mapped: 0
tx_timeouts: 0
rx_lost_in_ring: 0
But still I am not getting the speed in kbps???
david schwarz
It is not possible to determine the speed any way other than trying it and seeing. There is simply no way to know what will be the limiting factor.
david
That's easy: if you're not transmitting or receiving anything, your speed is 0 kbps.
If you are trying to figure out what the NIC is set to (10Mbps, 100Mbps, etc.), that information is shown by ethtool.
Maxwell lol
echo 0
(if you aren't downloading, then you aren't using the network). You could use a sniffer and calculate the current rate for existing traffic.
wolfang draxinger
If there are 0 bytes transferred per second, then the speed is 0 bits/s. So there must be some data transferred to have something to measure.
If you just want to know the bandwidth utilized, without wasting bandwidth, then nstat or dstat might be the right tool for you. But again: if there's nothing transmitted, there's nothing to measure.
If you want to know the bandwidth limit of your LAN connection, ethtool will show you some value close to the actually available bandwidth.
If you want to know the bandwidth to/from your ISP, then you have to measure. DSL connections, for example, will always run at the highest bandwidth the telephone line supports. Any Internet bandwidth limit is imposed by the ISP through traffic shaping, which can only be measured by transmitting data.
repo
The only thing I can think of is to look at the speed of the line in the router. My router connects at 12000 kbps and gives me a download speed of 10500 kbps, and likewise for upload. Of course you need to download/upload something in order to see the actual speed. However, your speed is also limited by the speed of the other site.
1PW
A chain is as weak as its weakest link. Your system could be connected to the fastest service money can buy. However, if the site you're downloading from is served by a lesser service, that is one of many impediments. Internet congestion is another.
Allen kistler
"ethtool -d" will dump the registers for the interface. Depending on the hardware driver, the link speed may be decodable. (Not all hardware and drivers support setting and getting the speed.) Be sure to look for the actual link speed, not the capability of the interface. (That is, a 1G card may be at 100M because that's what it negotiated with the switch.)
As others have pointed out, your speed to the switch is probably not your available bandwidth to any given server. The slowest link and the amount of congestion between client and server will dictate the effective bandwidth.
Neural OD
sudo apt-get install iftop
sanemanmad
Are you wanting to know your download speeds from the WWW, or from your home LAN? If the first is the case, then wget is probably accurate; however, you must take into account the network load and your router's capabilities. Try performing a speed test at a website that tests down/up speeds.
One way to be more precise: if you have cable or DSL, you should be able to connect directly to the modem via cat5e and attempt wget.
Perform a wget from a known website, e.g., sourceforge, download.com.
Also, Google is your friend. In my case I usually google what I am looking for, e.g. (DWA-556 Atheros wireless card ubuntu), and it usually points here anyway.
Neural -Od
Wasn't thinking too clearly - sorry - I had discovered another tool a while ago that works great: bwm-ng. It's in the repos.
ragestar
Heck, apt-get install iptraf
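If you'd rather not install anything, the kernel's own byte counters in /proc/net/dev are enough to measure the current rate. A rough bash sketch (assumes the interface is eth0):
#!/bin/bash
# sample received bytes on eth0 twice, one second apart
rx1=$(awk '/eth0:/ { sub(/.*:/, ""); print $1 }' /proc/net/dev)
sleep 1
rx2=$(awk '/eth0:/ { sub(/.*:/, ""); print $1 }' /proc/net/dev)
echo "download: $(( (rx2 - rx1) / 1024 )) KB/s"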
/usr/local/bin
How valid is the following claim, which I read in the book "Beginning Linux Programming" by Neil Matthew:
"if you want the script to be executable by others, you could use /usr/local/bin or another system directory as a convenient location for adding new programs"
glennsperf
Hi. With Mandriva, /usr/local/bin is where packages that users install from source are placed. Like if I want the latest perl, qt4, python, etc. that are not available from my normal repo, then make install puts them here.
The program is found when you call it because /usr/local/bin is checked as well as /usr/bin. You can also do it manually, by creating links from /usr/local/bin into /usr/bin.
There is probably a neater way of doing this, but generally make install does it automagically.
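For a single script, the claim works out to something like this (a sketch; myscript.sh is a placeholder name):
# copy the script into the system-wide location and mark it world-executable
sudo install -m 755 myscript.sh /usr/local/bin/myscript
myscript    # any user can now run it, since /usr/local/bin is on the default PATH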
Wednesday, February 11, 2009
GREP examples
grep [options] PATTERN [FILES]
Option Meaning
-c print a count of the number of lines that match
-E turn on extended expressions
-h suppress the normal prefixing of each output line with the name of the file it was found in
-i ignore case
-l list the names of the files with matching lines; don't output the actual matched lines
-v invert the matching pattern to select non-matching lines rather than matching lines
Ex:- grep 'e$' *
to look for lines that end with the letter e in all the files in the current directory (quoting the pattern keeps the shell from touching it)
Ex:- grep 'a[[:blank:]]' *
to find lines with words that end with a in all the files in the current directory
Ex:- grep 'Th.[[:space:]]' *
to find three-letter words that start with Th in all the files in the current directory
Ex:- grep -E '[a-z]{10}' *
to use the extended grep mode to search for lowercase words that are exactly 10 characters long (with -E, the braces are not escaped)
Ex:- recursively search a folder for the given text using grep:
grep -R "text" *
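The options from the table combine as you'd expect; a few illustrative runs (file names are placeholders):
$ grep -ic 'error' report.txt    # count matching lines, ignoring case
$ grep -l 'TODO' *.c             # print only the names of files that match
$ grep -v '^#' /etc/host.conf    # show the non-comment lines of a config file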
Monday, February 2, 2009
nsswitch.conf and host.conf differences
peter72
Have a network name resolution question.
What is the difference between the name resolution order in /etc/host.conf and in /etc/nsswitch.conf?
I know nsswitch.conf from my Solaris days, and I have never even used host.conf. How can you have two different files for where to look for name resolution?
emailssent
Here is the solution for you.
The host.conf file is one of the configuration files used to set the order of precedence among the various name services. The host.conf file defines several options that control how the /etc/hosts file is processed and how it interacts with DNS.
The nsswitch.conf file handles much more than just the order of precedence between the host table and DNS. It defines the sources for several different system administration databases, because it is an outgrowth of NIS.
The nsswitch.conf file has superseded the host.conf file because it provides more control over more resources. Linux systems generally have both files configured, but the action takes place in the nsswitch.conf file.
Now the difference: host.conf is the older file used for the order of precedence among the various name services, whereas nsswitch.conf is the newer one. host.conf is an old configuration file that does some of what nsswitch.conf does and is still in use. So why is host.conf present when nsswitch.conf does everything? Because the old resolver architecture follows host.conf, and some systems still look at host.conf first.
For more info and clarity, google it. The source I am answering from is Craig Hunt's book on DNS servers. The above is all as far as I remember.
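The two files express the same host-lookup order in different syntaxes; typical entries look like this (check your own files, as defaults vary by distribution):
# /etc/host.conf -- old resolver syntax: try /etc/hosts, then DNS
order hosts,bind
multi on
# /etc/nsswitch.conf -- one line per database; the hosts line controls name lookup
hosts: files dns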
Port forwarding
See: http://www.debuntu.org/2006/04/08/22-ssh-and-port-forwarding-or-how-to-get-through-a-firewall
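The core trick from that article is ssh's -L option; a minimal example (hostnames are placeholders):
# forward local port 8080 through gateway.example.com to port 80 on intranethost
ssh -L 8080:intranethost:80 user@gateway.example.com
# while the session is open, http://localhost:8080/ reaches intranethost:80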
cron command
The name of the cron daemon is crond.
crontab Entries
A crontab entry has six fields: the first five are used to specify the time for an action, while
the last field is the action itself.
• The first field specifies minutes (0–59).
• The second field specifies the hour (0–23).
• The third field specifies the day of the month (1–31).
• The fourth field specifies the month of the year (1–12, or month prefixes like Jan and Sep).
• The fifth field specifies the day of the week (0–6, or day prefixes like Wed and Fri), starting with 0 as Sunday.
syntax: minute hour day-month month day(s)-week task
Ex:
0 2 * * 1-5 tar cf /home/backp /home/projects
0 2 * * Mon-Fri tar cf /home/backp /home/projects
0 2 * * 0,3,5 tar cf /home/backp /home/projects
The cron.d Directory
On a heavily used system, the /etc/crontab file can easily become crowded. There may also be instances in which certain entries require different variables. For example, you may need to run some task under a different shell. To help you organize your crontab tasks, you can place crontab entries in files within the cron.d directory. The files in the cron.d directory all contain crontab entries of the same format as /etc/crontab. They may be given any name. They are treated as added crontab files, with cron checking them for tasks to run.
The crontab command
The crontab command takes the contents of the text file and creates a crontab file in the /var/spool/cron directory, adding the name of the user who issued the command. In the following example, the root user installs the contents of mycronfile as the root’s crontab file:
sudo crontab mycronfile
This creates a file called /var/spool/cron/root. If a user named justin installs a crontab file, it creates a file called /var/spool/cron/justin. You can control use of the crontab command by regular users with the /etc/cron.allow file. Only users whose names appear in this file can create crontab files of their own. Conversely, the /etc/cron.deny file lists those users denied use of the cron tool, preventing them from scheduling tasks. If neither file exists, access is denied to all users. If a user is not in an existing /etc/cron.allow file, access is denied. However, if the /etc/cron.allow file does not exist and the /etc/cron.deny file does, then all users not listed in /etc/cron.deny are automatically allowed access.
Editing in cron
Never try to edit your crontab file directly. Instead, use the crontab command with the -e option. This opens your crontab file in the /var/spool/cron directory with a standard text
editor, such as Vi (crontab uses the default editor as specified by the EDITOR shell environment variable).
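A typical editing session then looks like this (reusing the backup entry from the examples above):
$ crontab -e     # opens your crontab in the default editor; add, e.g.:
                 # 0 2 * * Mon-Fri tar cf /home/backp /home/projects
$ crontab -l     # list the installed entries to verify
$ crontab -r     # remove your crontab entirely (use with care)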
/etc/udev/rules.d
Device files are no longer handled in a static way; they are now dynamically generated as needed. Previously a device file was created for each possible device, leading to a very large number of device files in the /dev directory. Now your system detects only those devices it uses and creates device files for them, resulting in a much smaller listing of device files.
The tool used to detect and generate device files is udev, user devices. Each time your system is booted, udev will automatically detect your devices and generate device files for them in the /dev directory. This means that the /dev directory and its files are re-created each time you boot. It is a dynamic directory, no longer static. To manage these device files, you use udev configuration files located in the /etc/udev directory. udev is also able to manage all removable devices dynamically: it will generate and configure device files for removable devices as they are attached, and remove these files when the devices are removed. In this sense, all devices are now considered hotplugged, with fixed devices simply being hotplugged devices that are never removed.
As /dev is now dynamic, any changes you make manually to the /dev directory will be lost when you reboot. This includes the creation of any symbolic links, such as /dev/cdrom, that many software applications use. Instead, such symbolic links have to be configured using udev rules listed in configuration files located in the /etc/udev/rules.d directory. Default rules are already in place for symbolic links, but you can create rules of your own.
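A rule file in /etc/udev/rules.d is a list of match==value pairs plus assignments. For instance, a custom symlink rule might look like this (the device and file names are placeholders):
# /etc/udev/rules.d/10-local.rules
# create /dev/backupdisk whenever the kernel registers sdb1
KERNEL=="sdb1", SYMLINK+="backupdisk"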
df command
The df command reports file system disk space usage. It lists all your file systems by their device names, how much disk space they take up, and the percentage of the disk space used, as well as where they are mounted. With the -h option, it displays the information in a more readable format, measuring disk space in human-readable units such as megabytes and gigabytes instead of 1K blocks. The df command is also a safe way to obtain a listing of all your partitions, instead of using fdisk (because with fdisk you can erase partitions). df shows only mounted partitions, however, whereas fdisk shows all partitions. Here’s an example:
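With -h, the root partition from the df . example below comes out in rounded, human-readable units; roughly:
$ df -h /
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              73G  9.6G   60G  14% /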
You can also use df to tell you to what file system a given directory belongs. Enter df
with the directory name or df . for the current directory:
df .
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 76058116 10021764 62203248 14% /
mount command
syntax: mount [options] device mountpoint/directory
Ex:
sudo mount /dev/hdc2 /mymedia
# mount -t ext3 /dev/hda4 /mnt/mydata
auto option
If you are unsure about the type of file system that a disk holds, you can mount it specifying the auto file system type with the -t option. Given the auto file system type,
mount attempts to detect the type of file system on the disk automatically. This is useful if you are manually mounting a floppy disk whose file system type you are unsure of (HAL
also automatically detects the file system type of any removable media, including floppies).
Here’s an example: mount -t auto /dev/fd0 /media/floppy
Mounting DVD/CD Disc Images
Mounting a DVD/CD disc image is also performed with the mount command, but it requires the use of a loop device. Specify the loop device with the loop option as shown in the next example. Here mydocuments.iso is mounted on the /media/mycdrom directory as a file system of type iso9660. The image can be mounted to any empty directory on your system. Be sure to unmount it when you finish.
mount -t iso9660 -o ro,loop=/dev/loop0 mydocuments.iso /media/mycdrom
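End to end, the loop mount might look like this (assuming mydocuments.iso is in the current directory; on recent systems, -o loop with no device number picks a free loop device for you):
$ sudo mkdir -p /media/mycdrom
$ sudo mount -t iso9660 -o ro,loop mydocuments.iso /media/mycdrom
$ ls /media/mycdrom            # browse the image's files
$ sudo umount /media/mycdrom   # detach when done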
Sunday, February 1, 2009
dpkg-query
dpkg-query -l
On the command line (terminal window), use dpkg-query with the -l option to list all your packages
Use the -L option to list only the files that a package has installed: dpkg-query -L wine
To see the status information about a package, including its dependencies and configuration files, use the -s option. Fields will include Status, Section, Architecture, Version, Depends
(dependent packages), Suggests, Conflicts (conflicting packages), and Conffiles (configuration files).
Ex: dpkg-query -s wine
Use the -S option to determine to which package a particular file belongs:
dpkg-query -S filename
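For instance, asking which package owns /bin/ls on a Debian/Ubuntu system:
$ dpkg-query -S /bin/ls
coreutils: /bin/ls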
values from linux commands
If you place a Linux command within back quotes (`) on the command line, that command is first executed and its result becomes an argument on the command line
Ex:
$ listc=`ls *.pl`
$ echo $listc
Keep in mind the difference between single quotes and back quotes. Single quotes treat a Linux command as a set of characters. Back quotes force execution of the Linux command.
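A quick way to see the difference between the two kinds of quotes:
$ now=`date`
$ echo $now      # prints the actual date and time
$ now='date'
$ echo $now      # prints just the four characters
date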
noclobber option
The redirection operation creates the new destination file. If the file already exists, it will be overwritten with the data in the standard output. You can set the noclobber feature to prevent overwriting an existing file with the redirection operation. In this case, the redirection operation on an existing file will fail. You can override the noclobber feature by placing a vertical bar after the redirection operator (>|). You can place the set -o noclobber command in a shell configuration file to make it the default (see Chapter 11). The next example sets the noclobber feature for the BASH shell and then forces the overwriting of the oldletter file if it already exists:
$ set -o noclobber
$ cat myletter >| oldletter
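A short session showing the feature in action (the exact error wording may differ by bash version):
$ set -o noclobber
$ date > today
$ date > today
bash: today: cannot overwrite existing file
$ date >| today     # the vertical bar overrides noclobber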
/etc/inputrc
The actual associations of keys and their tasks, along with global settings, are specified in the /etc/inputrc file. The editing capabilities of the BASH shell command line are provided by Readline, which supports numerous editing operations. You can even bind a key to a selected editing operation. Readline uses the /etc/inputrc file to configure key bindings. This file is read automatically by your /etc/profile shell configuration file when you log in. You can customize your editing commands by creating an .inputrc file in your home directory (this is a dot file). It may be best to first copy the /etc/inputrc file as your .inputrc file and then edit it. /etc/profile will first check for a local .inputrc file before accessing the /etc/inputrc file. You can find out more about Readline in the BASH shell reference manual at www.gnu.org/software/bash/manual/.
NB : verified on ubuntu 8.04.1
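A couple of typical bindings for a personal .inputrc, in standard Readline syntax:
# search history for lines starting with what you've already typed
"\e[A": history-search-backward
"\e[B": history-search-forward
# make tab completion case-insensitive
set completion-ignore-case on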