tar cjvf myfiles.tar.bz2 *.txt
tar czvf myfiles.tar.gz *.txt
tar cf filename.tar dirname
or
cd into the directory, then: tar cvf filename.tar *
tar cvf /dev/fd0 /usr/src /etc /home
Home directory backup:
cd /home
sudo tar czvf /tmp/homedir.tgz *
sudo tar czvf /tmp/homedir.tgz --newer "2006-06-23" *
...............................................................
tar -jxvf file.tar.bz2
tar -zxvf file.tar.gz -C /tmp
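Before extracting, it often helps to list what is inside an archive first (tar t). A quick sketch tying the create/list/extract flags together; all file names here are invented for illustration:

```shell
# build a throwaway archive in a scratch directory
workdir=$(mktemp -d)
cd "$workdir"
echo "hello" > a.txt
echo "world" > b.txt
tar czvf myfiles.tar.gz a.txt b.txt   # c = create, z = gzip, v = verbose, f = file

# t = list the contents without extracting anything
tar tzvf myfiles.tar.gz

# extract into a specific directory with -C
mkdir out
tar xzvf myfiles.tar.gz -C out
cat out/a.txt
```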
Thursday, October 30, 2008
dd command examples
Use dd to create a file of any size:
dd if=/dev/zero of=testfile_10MB bs=10485760 count=1
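The same 10 MB file can be written as ten 1 MiB blocks, which is easier to read than one giant block size; a sketch with the resulting size verified afterwards (the temp directory and file name are illustrative):

```shell
workdir=$(mktemp -d)
# 10 blocks of 1 MiB each = 10485760 bytes, equivalent to bs=10485760 count=1
dd if=/dev/zero of="$workdir/testfile_10MB" bs=1M count=10 2>/dev/null
# verify the size in bytes
wc -c < "$workdir/testfile_10MB"
```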
Backup Entire Harddisk to an Image
Back up the contents of the hard drive to a file, creating an image of the drive:
dd if=/dev/sda of=/tmp/sda.iso
Back up a hard disk to a remote host:
dd bs=1M if=/dev/hda | gzip | ssh user@ip_addr 'dd of=hda.gz'
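Restoring is the same pipeline run backwards (gzip -dc into dd). A sketch using ordinary files as stand-ins for /dev/hda so it is safe to try; every path here is invented for illustration:

```shell
workdir=$(mktemp -d)
# stand-in for a disk: an ordinary file with some data
printf 'fake disk contents' > "$workdir/hda"

# "back up": read with dd, compress with gzip
dd if="$workdir/hda" 2>/dev/null | gzip > "$workdir/hda.gz"

# "restore": decompress and write back out with dd
gzip -dc "$workdir/hda.gz" | dd of="$workdir/hda.restored" 2>/dev/null

# the restored copy should be byte-identical to the original
cmp "$workdir/hda" "$workdir/hda.restored" && echo "images match"
```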
Wednesday, October 29, 2008
pdfimages: Extract and Save Images From A Portable Document Format ( PDF ) File
Posted By vivek On August 28, 2008 @ 8:36 pm In BASH Shell, CentOS, Debian / Ubuntu, Linux, Linux / UNIX File Formats, Package management, RedHat and Friends, Suse, UNIX, Ubuntu Linux | No Comments
Q. How do I extract images from a PDF file under Linux / UNIX shell account?
A. pdfimages works as a Portable Document Format (PDF) image extractor under Linux / UNIX operating systems. It saves images from a PDF file as Portable Pixmap (PPM), Portable Bitmap (PBM), or JPEG files. pdfimages reads the PDF file, scans one or more pages, and writes one PPM, PBM, or JPEG file for each image, named image-root-nnn.xxx, where nnn is the image number and xxx is the image type (.ppm, .pbm, .jpg).
pdfimages is installed via the poppler-utils package on most Linux distributions:
# yum install poppler-utils
OR
# apt-get install poppler-utils
pdfimages syntax
pdfimages /path/to/file.pdf /path/to/output/image-root
To extract images from the PDF file called bar.pdf and save each one as image-{000,001,...}.ppm, enter:
$ pdfimages bar.pdf /tmp/image
$ ls /tmp/image*
Sample output:
image-000.ppm image-1025.ppm image-1140.ppm image-1256.ppm image-247.ppm image-374.ppm image-501.ppm image-628.ppm image-755.ppm image-882.ppm
image-001.ppm image-1026.ppm image-1141.ppm image-1257.ppm image-248.ppm image-375.ppm image-502.ppm image-629.ppm image-756.ppm image-883.ppm
image-002.ppm image-1027.ppm image-1142.ppm image-1258.ppm image-249.ppm image-376.ppm image-503.ppm image-630.ppm image-757.ppm image-884.ppm
Normally, all images are written as PBM (for monochrome images) or PPM (for non-monochrome images) files. With the -j option, images in DCT format are saved as JPEG files. All non-DCT images are saved in PBM/PPM format as usual:
$ pdfimages -j bar.pdf /tmp/image
The -f option specifies the first page to scan. To start scanning from page 5 onwards, enter:
$ pdfimages -j -f 5 bar.pdf /tmp/image
The -l option specifies the last page to scan. To scan only the first 5 pages, enter:
$ pdfimages -j -l 5 bar.pdf /tmp/image
URL to article: http://www.cyberciti.biz/faq/easily-extract-images-from-pdf-file/
Dump the Text on a Virtual Console to a File
If you’re trying to fix a problem, you might want to capture the output of a command for reproduction on a website forum, along with the command you typed to get the results. If you’re working in a terminal window, you can cut and paste, but what if you’re working at a virtual console? If you simply want to capture the result of a command, just redirect the output:
ls > output.txt 2>&1
This will send both the output and error output (if any) of the ls command to output.txt. If you want to capture the command you typed and any other command-line detritus (including output), use the screendump command. The following will send everything currently on the current screen (command-line prompts included) to a text file called output.txt:
sudo screendump > output.txt
The command has to be issued as root because of permission issues, but the resulting file will be owned by you.
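The redirection idiom from above can be checked with any command that writes to both streams; a small sketch (the output file name is illustrative):

```shell
workdir=$(mktemp -d)
# a command group that writes one line to stdout and one to stderr
{ echo "normal output"; echo "error output" >&2; } > "$workdir/output.txt" 2>&1
# both lines end up in the file, because 2>&1 sends stderr where stdout goes
cat "$workdir/output.txt"
```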
Tested on Ubuntu 8.04
source www.ubuntukungfu.org
find command example
The full syntax of the find command:
find [path] [options] [tests] [actions]
Grand, we can now find files based on a set of criteria. What would be even better is to apply some actions to those files, which can be done with the -exec switch.
We can now find .avi files that are newer than 15 days. In this example, we are going to move those files to another location: /my/new/movies. I assume this directory already exists on your system.
Moving .avi files bigger than 700M and younger than 15 days to /my/new/movies can be done with:
ex:- find /home/ -name '*.avi' -a -size +700M -mtime -15 -exec mv '{}' /my/new/movies/ \;
ex:- find /home -name '*.pl' -ls
ex:- search from the root directory downwards for all files which have exactly 2 links
find / -links 2 -print
ex:- search from the root directory downwards for all files which have fewer than 2 links
find / -links -2 -print
ex:- search from the root directory downwards for all files which have more than 2 links
find / -links +2 -print
ex:- search all the directories from / downwards for files whose inode number is 4014896 and print them
find / -inum 4014896 -print
ex:- search from the root directory downwards for all files which have permission 777
find / -perm 777 -print
ex:- search from the root directory downwards for all files whose owner is guest and group is guest
find / \( -user guest -a -group guest \) -print
ex:- search from the root directory downwards for all files whose owner is guest or whose name is xyz
find / \( -user guest -o -name xyz \) -print
ex:- search from the current directory downwards for all files whose size is exactly 10 bytes
find . -size 10c -print
Suffix Meaning
b 512-byte blocks (the default)
c Bytes
k Kilobytes (KB)
M Megabytes (MB)
G Gigabytes (GB)
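The size suffixes above can be tried out against files of known size; a quick sketch (file names invented for illustration):

```shell
workdir=$(mktemp -d)
# a 10-byte file and a 2 KiB file
printf '0123456789' > "$workdir/ten_bytes"
dd if=/dev/zero of="$workdir/two_kb" bs=1024 count=2 2>/dev/null

find "$workdir" -size 10c -print    # c = bytes: matches ten_bytes exactly
find "$workdir" -size +1k -print    # +1k = more than 1 KiB: matches two_kb
```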
Ex: - search in current directory downwards all files which were accessed exactly 7 days back
find . -atime 7 -print
Mind the use of '{}' and \; (there is a space before \;).
'{}' matches the file that was found, while \; terminates the -exec statement.
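The '{}' and \; mechanics can be exercised safely in a scratch directory; a sketch of the earlier "move found files" idea, with all names invented for illustration:

```shell
workdir=$(mktemp -d)
mkdir "$workdir/movies"
touch "$workdir/a.avi" "$workdir/b.avi" "$workdir/notes.txt"

# move every .avi found directly under $workdir into movies/
# '{}' is replaced by each found path; \; ends the -exec clause
find "$workdir" -maxdepth 1 -name '*.avi' -exec mv '{}' "$workdir/movies/" \;

ls "$workdir/movies"
```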
Ex:
How do you find the files in the current folder owned by scott? Use find with the -user option, followed by the user name (or the numeric user ID, which you can find in /etc/passwd):
$ find . -user scott
find -type
One of the most useful options for find is -type, which allows you to specify the type of object you wish to look for. Remember that everything on a UNIX system is a file (covered back in Chapter 1, "Things to Know About Your Command Line," in the "Everything Is a File" section), so what you're actually indicating is the type of file you want find to ferret out for you. Table 10.2 lists the file types you can use with find.
File Type Letter Meaning
f Regular file
d Directory
l Symbolic (soft) link
b Block special file
c Character special file
p FIFO (First In First Out)
s Socket
Ex:
$ find Steely_Dan/ -type d
$ find Steely_Dan/ -type d | sort
find -a
A key feature of find is the ability to join several options to more tightly focus your searches. You can link together as many options as you'd like with -a (or -and).
$ find . -name "Rolling_Stones*" -a -type f
$ find . -name " Rolling_Stones* " -a -type f | wc -l
find -o
We can also utilize -o (or -or) to combine options using OR.
Ex:
find . -size +10M -o -size 10M
find . \( -size +10M -o -size 10M \) ! -name "*25*"
find . \( -name "*mp3*" -o -name "*.ogg*" \) -a -type f | wc -l
find . \( -name "*mp3*" -o -name "*.ogg*" -o -name "*.flac*" \) -a -type f | wc -l
Execute a Command on Every Found File, find -exec
Ex:
find . -name " *MP3 " -exec rename's/MP3/mp3/g' {} \;
The rename command is followed with instructions for the name change in this format: s/old/new/g. (The s stands for "substitute," while the g stands for "global.")
ex: find . -name "* *.m3u" -exec rename 's/ /_/g' {} \;
Print Find Results into a File, find -fprint
Ex: find . ! \( -name "*mp*" -o -name "*ogg" -o -name "*flac" -o -type d \) -fprint non_music_files.txt
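The -fprint action (GNU find) writes matches to a file instead of stdout; a runnable sketch of the idea above, with all file names invented for illustration:

```shell
workdir=$(mktemp -d)
touch "$workdir/track.mp3" "$workdir/readme.txt" "$workdir/photo.jpg"

# write every non-music, non-directory entry to a file instead of stdout
find "$workdir" ! \( -name "*mp3" -o -name "*ogg" -o -name "*flac" -o -type d \) \
    -fprint "$workdir/non_music_files.txt"

cat "$workdir/non_music_files.txt"   # lists readme.txt and photo.jpg, not track.mp3
```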

Tuesday, October 28, 2008
Instantly Hide a File or Folder
Any file or folder whose name is preceded with a period (.) is hidden from view in Nautilus and also won't appear in the output of shell commands such as ls, unless the user specifically chooses to view hidden files (ls -a, or clicking View –> Show Hidden Files in Nautilus). So to hide a file or folder, just rename it (select it and hit F2), and put a period in front of the filename. Gone. If the file doesn't vanish, hit F5 to refresh the file listing. To return the file to view, just remove the period.
If you want to make a file disappear from Nautilus' view of files (including the desktop) but still appear in command-line listings, add a tilde symbol (~) to the end. For example, to hide partypicture.jpg, change its filename to partypicture.jpg~. To hide text file, change its name to text file~.
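The dot-prefix behaviour is easy to see on the command line; a quick sketch (file names invented for illustration):

```shell
workdir=$(mktemp -d)
cd "$workdir"
touch visible.txt .hidden.txt

ls       # only visible.txt appears; dot-files are skipped
ls -a    # .hidden.txt now shows up, along with . and ..
```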
source www.ubuntukungfu.org
VLC: Command Line Interface
This post is for those who love to work out everything from the terminal. Ever wondered how to play media files (audio or video) from the terminal? Check it out!
In this post I assume you are using Linux with VLC installed. VLC has a very simple GUI with lots of functionality. It also has an equally powerful ncurses command-line interface. You can open this interface by typing in the terminal:
$ vlc -I ncurses
Now press ‘h’ to explore the help.
Let me show some usage. Say you have some mp3 files in ~/Music and you want to add all of them to the playlist. Type in the terminal:
$ vlc -I ncurses ~/Music/*.mp3
And you will find all these files in the playlist.
There are plenty of options: you can increase the volume by pressing ‘a’ and decrease it by pressing ‘z’. For loads of other options press ‘h’ and browse through the help.
Keep rocking!
source:- http://spsneo.com/blog/2008/08/05/vlc-command-line-interface/
Sunday, October 26, 2008
Linux change the speed and duplex settings of an Ethernet card
Posted By vivek On May 2, 2006 @ 4:50 pm In CentOS, Debian / Ubuntu, Linux, Networking, RedHat and Friends, Suse, Ubuntu Linux | 8 Comments
Q. How do I change the speed and duplex settings of my Ethernet card?
A. Under Linux, use the mii-tool or ethtool packages, which allow a Linux sysadmin to view and modify the negotiated speed of a network interface card (NIC), i.e. they are useful for forcing specific Ethernet speed and duplex settings.
Depending on which type of Ethernet card is installed on the system, you need to use either mii-tool or ethtool. I recommend installing both and using whichever tool works with your card.
Task: Install mii-tool and ethtool tools
If you are using Debian Linux you can install both of these packages with the following command:
# apt-get install ethtool net-tools
If you are using Red Hat Enterprise Linux you can install both of these packages with the following command:
# up2date ethtool net-tools
If you are using Fedora Core Linux you can install both of these packages with the following command:
# yum install ethtool net-tools
Task: Get speed and other information for eth0
Type the following command as the root user:
# ethtool eth0
Output:
Settings for eth0:
Supported ports: [ TP MII ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
Advertised auto-negotiation: Yes
Speed: 100Mb/s
Duplex: Full
Port: MII
PHYAD: 32
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: pumbg
Wake-on: d
Current message level: 0x00000007 (7)
Link detected: yes
Or use the mii-tool command as follows:
# mii-tool eth0
Output:
eth0: negotiated 100baseTx-FD flow-control, link ok
Task: Change the speed and duplex settings
Setup eth0 negotiated speed with mii-tool
Disable autonegotiation, and force the MII to either 100baseTx-FD, 100baseTx-HD, 10baseT-FD, or 10baseT-HD:
# mii-tool -F 100baseTx-HD
# mii-tool -F 10baseT-HD
Setup eth0 negotiated speed with ethtool:
# ethtool -s eth0 speed 100 duplex full
# ethtool -s eth0 speed 10 duplex half
To make these settings permanent you need to create a shell script and call it from /etc/rc.local (Red Hat), or, if you are using Debian, create a script in the /etc/init.d/ directory and run the update-rc.d command to register it.
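A minimal sketch of making the setting persistent via /etc/rc.local, reusing the 100 Mbps full-duplex setting from above; the interface name eth0 and the chosen speed are illustrative and depend on your hardware:

```shell
# Line to append to /etc/rc.local (Red Hat) so it runs at boot.
# On Debian, put the same command in a script under /etc/init.d/
# and register it with update-rc.d instead.
ethtool -s eth0 speed 100 duplex full
```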
Read man page of mii-tool and ethtool for more information.
Linux LAN card: Find out full duplex / half speed or mode
Posted By vivek On January 27, 2006 @ 1:55 am In CentOS, Debian / Ubuntu, Linux, Networking, RedHat and Friends, Suse, Troubleshooting | 4 Comments
Q. How do I find out if my LAN (NIC) card is working at full or half duplex mode / speed under Linux?
A. A LAN card or NIC is used to send and receive data. Technically, we use the word duplex for this functionality. Full duplex means you are able to send and receive data (files) simultaneously. In half duplex, you can either send or receive data at a given moment (i.e. you cannot send and receive simultaneously). Obviously, full duplex gives you the best user experience. But how can you find out whether you are using full or half duplex speed/mode?
Task: Find full or half duplex speed
You can use dmesg command to find out your duplex mode:
# dmesg | grep -i duplex
Output:
eth0: link up, 100Mbps, full-duplex, lpa 0x45E1
ethtool command
Use ethtool to display or change Ethernet card settings. To display the duplex speed, enter:
# ethtool eth1
Output:
Settings for eth1:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 10Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 0
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: umbg
Wake-on: g
Current message level: 0x00000007 (7)
Link detected: yes
mii-tool command
You can also use mii-tool to find out your duplex mode. Type the following command at the shell prompt:
# mii-tool
Output:
eth0: negotiated 100baseTx-FD flow-control, link ok
Remember,
1. 100baseTx-FD: 100Mbps full duplex (FD)
2. 100baseTx-HD: 100Mbps half duplex (HD)
3. 10baseT-FD: 10Mbps full duplex (FD)
4. 10baseT-HD: 10Mbps half duplex (HD)
The mii-tool utility checks or sets the status of a network interface's Media Independent Interface (MII) unit. Most fast Ethernet adapters use an MII to autonegotiate link speed and duplex setting. If you are using an old card this utility may not work (use the dmesg command instead).
This utility is useful for forcing specific Ethernet speed and duplex settings too. To set 100Mbps full duplex speed under Linux:
# mii-tool -F 100baseTx-FD
Setup 10Mbps half duplex:
# mii-tool -F 10baseT-HD
You can find more information about setting the duplex speed with the ethtool command in the previous article.
Get Information About Your BIOS / Server Hardware From a Shell Without Opening Chassis ( BIOS Decoder )
Posted By vivek On January 24, 2008 @ 9:08 pm In Hardware, Howto, Linux, Linux desktop | 13 Comments
biosdecode is a command line utility that parses the BIOS memory and prints information about all structures (or entry points) it knows of. You can find out more information about your hardware, such as:
=> IPMI Device
=> Type of memory and speed
=> Chassis Information
=> Temperature Probe
=> Cooling Device
=> Electrical Current Probe
=> Processor and Memory Information
=> Serial numbers
=> BIOS version
=> PCI / PCIe Slots and Speed
=> Much more
biosdecode parses the BIOS memory and prints the following information about all structures:
=> SMBIOS (System Management BIOS)
=> DMI (Desktop Management Interface, a legacy version of SMBIOS)
=> SYSID
=> PNP (Plug and Play)
=> ACPI (Advanced Configuration and Power Interface)
=> BIOS32 (BIOS32 Service Directory)
=> PIR (PCI IRQ Routing)
=> 32OS (BIOS32 Extension, Compaq-specific)
=> VPD (Vital Product Data, IBM-specific)
=> FJKEYINF (Application Panel, Fujitsu-specific)
In this tip you will learn about decoding BIOS data (dumping a computer's DMI) and getting all the information about your computer hardware without rebooting the server.
More about the DMI tables
The DMI table doesn't only describe what the system is currently made of; it can also report possible evolutions, such as the fastest supported CPU or the maximum amount of memory supported.
dmidecode - Read biosdecode data in a human-readable format
Data provided by biosdecode is not in a human-readable format. You need the dmidecode command to dump a computer's DMI (SMBIOS) table contents on screen. This table contains a description of the system's hardware components, as well as other useful pieces of information such as serial numbers and BIOS revision. Thanks to this table, you can retrieve this information without having to probe for the actual hardware.
Task: Display information about IPMI Device
# dmidecode --type 38
Output:
# dmidecode 2.7
SMBIOS 2.4 present.
Handle 0x0029, DMI type 38, 18 bytes.
IPMI Device Information
Interface Type: KCS (Keyboard Control Style)
Specification Version: 2.0
I2C Slave Address: 0x10
NV Storage Device: Not Present
Base Address: 0x0000000000000CA2 (I/O)
Register Spacing: Successive Byte Boundaries
Task: Display information about PCI / PCIe Slots
# dmidecode --type 9
# dmidecode 2.7
SMBIOS 2.4 present.
Handle 0x000E, DMI type 9, 13 bytes.
System Slot Information
Designation: PCIX#1-133MHz
Type: 64-bit PCI-X
Current Usage: Available
Length: Long
ID: 1
Characteristics:
3.3 V is provided
Handle 0x000F, DMI type 9, 13 bytes.
System Slot Information
Designation: PCIX#2-100MHz
Type: 64-bit PCI-X
Current Usage: Available
Length: Long
ID: 2
Characteristics:
3.3 V is provided
Handle 0x0010, DMI type 9, 13 bytes.
System Slot Information
Designation: PCIE#3-x8
Type: Other
Current Usage: Available
Length: Other
Characteristics:
3.3 V is provided
Handle 0x0011, DMI type 9, 13 bytes.
System Slot Information
Designation: PCIE#4-x8
Type: Other
Current Usage: Available
Length: Other
Characteristics:
3.3 V is provided
Handle 0x0012, DMI type 9, 13 bytes.
System Slot Information
Designation: PCIE#5-x8
Type: Other
Current Usage: Available
Length: Other
Characteristics:
3.3 V is provided
Task: Find out Information about BIOS
# dmidecode --type 0
Output:
# dmidecode 2.7
SMBIOS 2.4 present.
Handle 0x0000, DMI type 0, 24 bytes.
BIOS Information
Vendor: Phoenix Technologies LTD
Version: 6.00
Release Date: 01/26/2007
Address: 0xE56C0
Runtime Size: 108864 bytes
ROM Size: 1024 kB
Characteristics:
PCI is supported
PNP is supported
BIOS is upgradeable
BIOS shadowing is allowed
ESCD support is available
Boot from CD is supported
Selectable boot is supported
EDD is supported
3.5"/2.88 MB floppy services are supported (int 13h)
ACPI is supported
USB legacy is supported
LS-120 boot is supported
ATAPI Zip drive boot is supported
BIOS boot specification is supported
Targeted content distribution is supported
Understanding BIOS keywords
dmidecode --type {KEYWORD / Number}
You need to pass dmidecode one of the following keywords:
* bios
* system
* baseboard
* chassis
* processor
* memory
* cache
* connector
* slot
All DMI types you need to use with dmidecode --type {Number}:


Display Power supply information, enter:
# dmidecode --type 39
Display CPU information, enter:
# dmidecode --type processor
Read man page for more information:
$ man dmidecode
[1]
biosdecode is a command line utility to parses the BIOS memory and prints information about all structures (or entry points) it knows of. You can find out more information about your hardware such as:
=> IPMI Device
=> Type of memory and speed
=> Chassis Information
=> Temperature Probe
=> Cooling Device
=> Electrical Current Probe
=> Processor and Memory Information
=> Serial numbers
=> BIOS version
=> PCI / PCIe Slots and Speed
=> Much more
biosdecode parses the BIOS memory and prints the following information about all structures :
=> SMBIOS (System Management BIOS)
=> DMI (Desktop Management Interface, a legacy version of SMBIOS)
=> SYSID
=> PNP (Plug and Play)
=> ACPI (Advanced Configuration and Power Interface)
=> BIOS32 (BIOS32 Service Directory)
=> PIR (PCI IRQ Routing)
=> 32OS (BIOS32 Extension, Compaq-specific)
=> VPD (Vital Product Data, IBM-specific)
=> FJKEYINF (Application Panel, Fujitsu-specific)
In this tip you will learn about decoding BIOS data (dumping a computer's DMI) and getting all information about computer hardware without rebooting the server.
More about the DMI tables
The DMI table doesn’t only describe what the system is currently made of; it can also report possible upgrades, such as the fastest supported CPU or the maximum amount of memory supported.
dmidecode - Read biosdecode data in a human-readable format
Data provided by biosdecode is not in a human-readable format. You need to use the dmidecode command to dump a computer’s DMI (SMBIOS) table contents on screen. This table contains a description of the system’s hardware components, as well as other useful pieces of information such as serial numbers and the BIOS revision. Thanks to this table, you can retrieve this information without having to probe for the actual hardware.
Task: Display information about IPMI Device
# dmidecode --type 38
Output:
# dmidecode 2.7
SMBIOS 2.4 present.
Handle 0x0029, DMI type 38, 18 bytes.
IPMI Device Information
Interface Type: KCS (Keyboard Control Style)
Specification Version: 2.0
I2C Slave Address: 0x10
NV Storage Device: Not Present
Base Address: 0x0000000000000CA2 (I/O)
Register Spacing: Successive Byte Boundaries
Task: Display information about PCI / PCIe Slots
# dmidecode --type 9
# dmidecode 2.7
SMBIOS 2.4 present.
Handle 0x000E, DMI type 9, 13 bytes.
System Slot Information
Designation: PCIX#1-133MHz
Type: 64-bit PCI-X
Current Usage: Available
Length: Long
ID: 1
Characteristics:
3.3 V is provided
Handle 0x000F, DMI type 9, 13 bytes.
System Slot Information
Designation: PCIX#2-100MHz
Type: 64-bit PCI-X
Current Usage: Available
Length: Long
ID: 2
Characteristics:
3.3 V is provided
Handle 0x0010, DMI type 9, 13 bytes.
System Slot Information
Designation: PCIE#3-x8
Type: Other
Current Usage: Available
Length: Other
Characteristics:
3.3 V is provided
Handle 0x0011, DMI type 9, 13 bytes.
System Slot Information
Designation: PCIE#4-x8
Type: Other
Current Usage: Available
Length: Other
Characteristics:
3.3 V is provided
Handle 0x0012, DMI type 9, 13 bytes.
System Slot Information
Designation: PCIE#5-x8
Type: Other
Current Usage: Available
Length: Other
Characteristics:
3.3 V is provided
Task: Find out Information about BIOS
# dmidecode --type 0
Output:
# dmidecode 2.7
SMBIOS 2.4 present.
Handle 0x0000, DMI type 0, 24 bytes.
BIOS Information
Vendor: Phoenix Technologies LTD
Version: 6.00
Release Date: 01/26/2007
Address: 0xE56C0
Runtime Size: 108864 bytes
ROM Size: 1024 kB
Characteristics:
PCI is supported
PNP is supported
BIOS is upgradeable
BIOS shadowing is allowed
ESCD support is available
Boot from CD is supported
Selectable boot is supported
EDD is supported
3.5"/2.88 MB floppy services are supported (int 13h)
ACPI is supported
USB legacy is supported
LS-120 boot is supported
ATAPI Zip drive boot is supported
BIOS boot specification is supported
Targeted content distribution is supported
Understanding BIOS keywords
dmidecode --type {KEYWORD|Number}
You need to pass dmidecode one of the following keywords:
* bios
* system
* baseboard
* chassis
* processor
* memory
* cache
* connector
* slot
You can also select any DMI type by number with dmidecode --type {Number}; the full table of type numbers is listed in the dmidecode man page.
Display Power supply information, enter:
# dmidecode --type 39
Display CPU information, enter:
# dmidecode --type processor
Read man page for more information:
$ man dmidecode
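As a rough sketch of putting this output to use, the interesting fields can be pulled out with standard text tools. parse_bios below is a hypothetical helper name, and the heredoc is a saved sample standing in for a real `dmidecode --type 0` run (which would need root privileges):

```shell
# Hypothetical helper: pull Vendor and Version out of "dmidecode --type 0"
# style output. The heredoc below is a saved sample standing in for a real
# run, which requires root.
parse_bios() {
  awk -F': *' '/Vendor:/              {print "vendor=" $2}
               /^[[:space:]]*Version:/ {print "version=" $2}'
}
bios_info=$(parse_bios <<'EOF'
BIOS Information
        Vendor: Phoenix Technologies LTD
        Version: 6.00
        Release Date: 01/26/2007
EOF
)
echo "$bios_info"
```

The same pattern works for any of the dmidecode sections shown above; only the awk patterns change.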
Thursday, October 23, 2008
Getting an Asus P5KPL-CM onboard LAN to work on ubuntu 8.04.1
Aclark
No need to spend more money. I just built a new machine with a P5Q Asus board. Using the instructions in the readme on the CD I got the onboard Gigabit LAN working perfectly.
If you are willing to do a little command line work it doesn't take long.
I created a folder named Drivers in my home folder and copied l1e-l2e-linux-v1.0.0.4.tar.bz2 to it from the Asus CD.
cd to that folder and
CODE
tar jxf l1e-l2e-linux-v1.0.0.4.tar.bz2
Now there will be two new folders - in my case the path was /home/awc/Drivers/atl1e/src
change directory to the /src folder and
CODE
make install
This gave me an error message - something about "CFLAGS" and fixing it to use "EXTRA_CFLAGS"
A quick search turned up a suggestion to try
CODE
sudo KBUILD_NOPEDANTIC=1 make install
That did the trick for me and now the driver module has been created and is in /lib/modules/2.6.24-16-generic/kernel/drivers/net/atl1e
2.6.24-16-generic is the kernel that my Hardy install is using - you may have to substitute if yours is different.
so now
CODE
cd /lib/modules/2.6.24-16-generic/kernel/drivers/net/atl1e
sudo insmod ./atl1e.ko
Now the module is loaded into the kernel and the lan works.
Their readme file goes on to give instructions for configuring the interface but my lan was instantly connected - I suppose by Network Manager. I prefer WiCD so later when I installed that I set my preferred config in WiCd.
I hope this makes sense - good luck !
I have to get up early tomorrow so don't think me rude if I don't answer any questions - -I'm sleeping.
PS Since we manually added this module I'm guessing we may have to do this again the next time there is a kernel update so save the instructions (If they work for you)
jcliburn
The driver for the L2e NIC (PCI ID 1969:1026) was recently accepted into the mainline kernel. It should be released with 2.6.27.
see: http://ubuntuforums.org/showthread.php?p=5551978
at Vs Cron ?
can anyone explain the tradeoff between the at command and the cron command?
is there any special situation where only the at command should be used?
which command is preferred, at or cron?
kpkeerthi
'at' lets you run a task (a command or a program) at a scheduled time once.
'cron' lets you schedule a task to run periodically. You define the 'period' using cron expressions.
More info: https://help.ubuntu.com/community/CronHowto
billymayday
Typically at is a one-off whereas cron is recurring. Neither is preferred, they are for different purposes (see previous sentence).
I use at quite a bit to start a process from an ssh session when I want to disconnect and leave it running. It's also good for doing downloads later in the night, etc.
Cron does routine tasks like backups that happen every day/week/whatever. I have one that marks my spam folder read every hour.
colucix
One advantage of the at command is that it preserves the user's shell environment, whereas cron runs in its own limited environment. But as billymayday already pointed out, at is for delayed tasks, cron is for recurrence.
Tuesday, October 21, 2008
.sudo_as_admin_successful
tefflox
".sudo_as_admin_successful"... should I be paranoid? what is this file? does this mean somebody cracked my system?
NetworkGuy
I also have the same file. I don't think your system has been hacked.
breno leitao
This is just a flag that bash checks to see whether it is your first time using sudo. I also think this is Ubuntu-specific.
Old soldier 2003
Yup, delete it and then run bash; you'll see this hint again:
Code:
To run a command as administrator (user "root"), use "sudo".
See "man sudo_root" for details.
at command examples
Ex: schedule a download
echo 'wget -c www.example.com/files.iso' | at 09:00
Ex:
at 0915 am mar 24
echo "Good Morning"
^d
Ex:
at now + 15 minutes
clear
ls -l
^d
Ex:
at 18:32 tomorrow
echo "Happy Birthday"
^d
Ex:
at 6 pm wednesday next week
who
^d
Ex:
at now + 1 week < atfile
Ex:
at 15:08 december 18, 2009 + 4 years
Ex:
root@petrescu:~ # at 5:30
warning: commands will be executed using /bin/sh
at> mpg123 /home/adrian/alarm.mp3
at>
job 3 at 2007-08-29 05:30
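Before queuing a job it can help to sanity-check the time spec. GNU date's -d option understands many (though not all) of the same phrases at does, so a rough preview of when a job would fire might look like this; this is a sketch, and at's own parser remains the final authority:

```shell
# Preview roughly when an at-style time spec would fire, using GNU date.
# date -d accepts similar (but not identical) phrases to at's parser.
now=$(date +%s)
fire=$(date -d 'now + 15 minutes' +%s)
date -d "@$fire" '+job would run at: %F %H:%M'
```

Phrases like 'tomorrow' and '18:32 tomorrow' also work with date -d; specs like 'teatime' are at-only.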
Monday, October 20, 2008
echo "echo hello world" | at now
sulekha
I gave the following command echo "echo hello world" | at now at the terminal to test whether the echo command is working, and got the output
warning: commands will be executed using /bin/sh
job 12 at Tue Oct 21 10:52:00 2008
but the echo command didn't execute
i edited /etc/at.deny file to remove the line bin
what should i do to make the at command work?
N.B:- I use ubuntu 8.04.1
echo "echo hello world" | at now
Quarles
Re: echo "echo hello world" | at now
Well, at is kind of a weird utility that works differently than people would expect. Two things to understand:
1) The at job runs in a separate environment, not in the terminal you entered the command into.
2) at reads the commands to run from its stdin when you submit the job, then executes them at the scheduled time. In other words, you need a command that writes the job text to stdout, like echo. For example:
Code:
date > /home/$USER/now.text | at sometime
will fail. The redirection sends the output of date to a file called now.text, so nothing is left on stdout to pipe into at, and the scheduled job ends up empty.
If, on the other hand, you run this:
Code:
echo "date > /home/$USER/now.text" | at sometime
you are piping the output of the echo command into at, which works correctly: at the time specified, it will produce a file containing the output of the date command.
So, to do what you are attempting, you need to add two things. One, you need to use "wall" (or similar) to make sure the message is sent to your terminal. Second, you need another "echo" command. So, the correct version looks like this:
Code:
echo 'echo "hello world" | wall' | at now
This will send the message "hello world" to all logged in users.
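The nested-quoting pattern can be checked without waiting on at at all: sh also reads commands from stdin, so piping the same job string into sh shows exactly what at would run. This is just a quick test harness, not anything at itself provides:

```shell
# sh, like at, reads commands from stdin -- so it makes a handy stand-in
# for checking that a job string is quoted the way you intended.
job='echo "hello world"'
out=$(echo "$job" | sh)
echo "$out"
```

Once the string does what you expect under sh, swap the `| sh` for `| at now` (or any other time spec).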
Linux: Find Out If a Particular Driver / Feature Compiled Into Running Kernel or Not
Posted By vivek On October 10, 2008 @ 8:14 am In BASH Shell, CentOS, Debian / Ubuntu, Linux, Linux / UNIX File Formats, Package management | No Comments
Q. I know how to find out information about compiled drivers under the FreeBSD kernel. But how do I find out whether a particular feature, driver, or filesystem support is compiled into my running Linux kernel? How do I find out if DMA support is compiled into my kernel?
A. The current Linux kernel configuration is stored in a .config file or a config-$(uname -r) file:
[a] /boot/config-$(uname -r) or /boot/config-$(uname -r)*: automatically generated kernel config file. This file is present under almost all Linux distros, including RHEL / CentOS / Fedora / Debian / Ubuntu Linux.
[b] /usr/src/kernels/$(uname -r)-$(uname -m)/.config or /usr/src/linux-2.6.N/.config: the current kernel config file.
If there is no /usr/src/kernels/$(uname -r)-$(uname -m)/ directory on your system, the kernel source has not been installed. Use the apt-get or yum command to install the kernel source.
To find out whether DMA support is compiled in, enter:
grep -i DMA .config
OR
grep -i DMA /boot/config-$(uname -r)*
Sample output:
CONFIG_GENERIC_ISA_DMA=y
CONFIG_ISA_DMA_API=y
CONFIG_BLK_DEV_IDEDMA_PCI=y
# CONFIG_BLK_DEV_IDEDMA_FORCED is not set
CONFIG_IDEDMA_PCI_AUTO=y
# CONFIG_IDEDMA_ONLYDISK is not set
# CONFIG_HPT34X_AUTODMA is not set
CONFIG_BLK_DEV_IDEDMA=y
# CONFIG_IDEDMA_IVB is not set
CONFIG_IDEDMA_AUTO=y
CONFIG_SCSI_SYM53C8XX_DMA_ADDRESSING_MODE=1
CONFIG_PDC_ADMA=m
# CONFIG_PATA_OPTIDMA is not set
CONFIG_I2O_EXT_ADAPTEC_DMA64=y
CONFIG_BCM43XX_DMA=y
CONFIG_BCM43XX_DMA_AND_PIO_MODE=y
# CONFIG_BCM43XX_DMA_MODE is not set
CONFIG_CARDMAN_4000=m
CONFIG_CARDMAN_4040=m
# DMA Engine support
CONFIG_DMA_ENGINE=y
# DMA Clients
CONFIG_NET_DMA=y
# DMA Devices
CONFIG_INTEL_IOATDMA=m
CONFIG_HAS_DMA=y
For simplicity, most lines contain only one option. Anything following a # is treated as a comment and ignored. The option CONFIG_HAS_DMA has three possibilities:
* CONFIG_HAS_DMA=y: DMA support is compiled into the kernel.
* CONFIG_HAS_DMA=m: DMA support is compiled as a loadable kernel module.
* CONFIG_HAS_DMA=n (usually shown as "# CONFIG_HAS_DMA is not set"): no DMA support.
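The three cases can be wrapped in a small helper. check_config below is a hypothetical name, and the heredoc writes a sample config fragment so the sketch does not depend on any particular /boot/config file:

```shell
# Hypothetical helper: classify a kernel config option as built-in (y),
# compiled as a module (m), or not set, given a config file.
check_config() {
  case $(grep "^$1=" "$2" | cut -d= -f2) in
    y) echo built-in ;;
    m) echo module ;;
    *) echo "not set" ;;
  esac
}
# Sample fragment standing in for /boot/config-$(uname -r):
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
CONFIG_HAS_DMA=y
CONFIG_PDC_ADMA=m
# CONFIG_IDEDMA_IVB is not set
EOF
r1=$(check_config CONFIG_HAS_DMA "$cfg")
r2=$(check_config CONFIG_PDC_ADMA "$cfg")
r3=$(check_config CONFIG_IDEDMA_IVB "$cfg")
echo "$r1 / $r2 / $r3"
rm -f "$cfg"
```

Point the second argument at /boot/config-$(uname -r) to query the running kernel's config instead of the sample.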
Article printed from Frequently Asked Questions: http://www.cyberciti.biz/faq
URL to article: http://www.cyberciti.biz/faq/linux-kernel-driver-feature-compiled/
gmailfs/gspace
sulekha
where can i find a utility/software in ubuntu that enables users to use their gmail account as an online file storage device?
throbbing brain
sudo apt-get install gmailfs
sulekha
can you suggest a link/tutorial on how to set up and use gmailfs on ubuntu?
floke
There's also a Firefox extension that does this.
powadha
Or you can just save the trouble and install the Firefox extension https://addons.mozilla.org/firefox/1593/
I've played a bit with libgmail for fun a while back and found it very buggy and slow. Perhaps it has become better but well, too much work for very little gain. The extension installs in seconds and does the job.
Just for the fun of getting it to work this is a nice howto though.
see: http://www.getgspace.com/howitworks.html
User's guide
1. Once Gspace is installed, open your Firefox browser. To access Gspace, go to Tools > Gspace. The Gspace console will appear on screen.
2. Manage your Gmail accounts. You can set up as many as you want. Remember that each one provides 2 GB of storage.
3. Log in to any of your Gmail accounts.
4. The Gspace console is divided into 4 key areas for you to manage your files: My Computer / My Gspace / Transfers / Status.
5. Enjoy Gspace features:
* File Transfer: to manage files
* Player Mode: to listen to your stored music files directly from Gspace
* Photo Mode: to view your collection of pictures
* Gmail Drive: to manage your Gdrive files as well
Sunday, October 19, 2008
Helping When a User Cannot Log In
When a user has trouble logging in on the system, the source may be a user error or a problem with the system software or hardware. The following steps can help determine where the problem is:
• Check the log files in /var/log. The /var/log/messages file accumulates system errors, messages from daemon processes, and other important information. It may indicate the cause or more symptoms of a problem. Also, check the system console. Occasionally messages about system problems that are not written to /var/log/messages (for instance, a full disk) are displayed on the system console.
• Determine whether only that one user or only that one user’s terminal/workstation has a problem or whether the problem is more widespread.
• Check that the user’s Caps Lock key is not on.
• Make sure the user’s home directory exists and corresponds to that user’s entry in the /etc/passwd file. Verify that the user owns her home directory and startup files and that they are readable (and, in the case of the home directory, executable). Confirm that the entry for the user’s login shell in the /etc/passwd file is accurate and the shell exists as specified.
• Change the user’s password if there is a chance that he has forgotten the correct password.
• Check the user’s startup files (.profile, .login, .bashrc, and so on). The user may have edited one of these files and introduced a syntax error that prevents login.
• Check the terminal or monitor data cable from where it plugs into the terminal to where it plugs into the computer (or as far as you can follow it). Try turning the terminal or monitor off and then turning it back on.
• When the problem appears to be widespread, check whether you can log in from the system console. Make sure the system is not in recovery mode. If you cannot log in, the system may have crashed; reboot it and perform any necessary recovery steps (the system usually does quite a bit automatically).
• If the user is logging in over a network connection, run the appropriate init script (page 507) to restart the service the user is trying to use (e.g., ssh).
• Use df to check for full filesystems. If the /tmp filesystem or the user’s home directory is full, login sometimes fails in unexpected ways. In some cases you may be able to log in to a textual environment but not a graphical one. When applications that start when the user logs in cannot create temporary files or cannot update files in the user’s home directory, the login process itself may terminate.
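A few of the /etc/passwd checks above can be scripted. check_login below is a hypothetical helper, demonstrated on the root account only because it exists everywhere; in real use you would pass the affected user's name:

```shell
# Hypothetical sketch of the passwd-entry checks: does the user's home
# directory exist, and is the login shell an executable file?
check_login() {
  entry=$(getent passwd "$1") || { echo "no such user: $1"; return 1; }
  home=$(echo "$entry" | cut -d: -f6)
  shell=$(echo "$entry" | cut -d: -f7)
  [ -d "$home" ]  && echo "home ok: $home"   || echo "home missing: $home"
  [ -x "$shell" ] && echo "shell ok: $shell" || echo "shell bad: $shell"
}
result=$(check_login root)
echo "$result"
```

Further checks (ownership and readability of the startup files, free space via df) follow the same pattern.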
Friday, October 17, 2008
caching mechanism
Hi all,
This is what i have read in a book
Linux uses a caching mechanism for frequently accessed files that speeds the process of locating an inode from a filename. This caching mechanism works only on filenames of up to 30 characters in length, so avoid giving frequently accessed files extremely long filenames.
how valid is this claim ?
what is the name of this caching mechanism ?
iponeverything
Caching is done through the VFS dentry cache (the dcache). Yes, the claim is real: short file names are stored inline in the dentry itself, so lookups on them are cheaper. But how many people exceed 30-character filenames?
Filename size is less of an issue on 64-bit Linux (the inline limit essentially doubles). Anyway, for a truly fascinating read (I am not being sarcastic!) about the VFS interface in 2.6.26, read this --
ref: http://www.mjmwired.net/kernel/Documentation/filesystems/vfs.txt
Linux directories do not shrink automatically
Hi ,
this is what i have read in a book
Because Linux directories do not shrink automatically, removing a file from a directory does not shrink the directory, even though it frees up space on the disk. To remove unused space and make a directory smaller, you must copy or move all the files to a new directory and remove the original directory.
maxwell lol
There are two different ways to think of the size of a directory. One is the size based on the contents of the directory; du shows this.
The second is the size of the "file" that contains the directory entries (the names and inode numbers of the files inside the directory). Don't get these confused.
If you do an ls -ld dirname, ls will show you the size of this file.
For instance:
drwxr-xr-x 2 root root 36864 2008-10-13 06:59 bin
drwxr-xr-x 2 root root 4096 2008-04-24 02:09 games
drwxr-xr-x 9 root root 4096 2008-10-06 06:32 include
drwxr-xr-x 120 root root 53248 2008-10-06 06:32 lib
drwxr-xr-x 3 root root 4096 2008-06-13 06:42 lib32
drwxr-xr-x 10 root root 4096 2008-04-24 02:04 local
Now du shows (in 1k blocks)
147528 bin
40 games
740 include
395440 lib
8 lib32
152 local
It is true that the size of the directory file grows as needed as you create more files. And when files are moved or deleted, the records in this file are marked "unused". So the file does not shrink in size.
But normally this difference is so small, it's not worth worrying about. In the above case, /usr/bin contains 147M, but the directory file itself is only 36K.
Trying to recover 8K of space on a 200GB disk is silly. But you are correct.
Rikishi42
Completely.
I have a directory in which sometimes 150,000+ files arrive before they get filed to their proper directories. This dir reaches 3.6 MB at times. So far, the only way I've found to reduce it is to remove and recreate it each time my procedure has emptied it.
The Natural philosopher
Yes, that is correct on the file systems I have used. There is no "garbage collection" on directories as such, just on their contents. It may not be correct on all filesystems though, I don't know.
John hassler
Rikishi42 writes:
> I have a directory in which sometimes 150.000+ files arrive, before they
> get filed to their proper directories.
> This dir reaches 3.6 MB at times.
How large is the partition?
> So far, the only way I've found to reduce it, is to remove and recreate
> it each time my procedure has emptied it.
What do you gain by reducing it? It's just going to grow back to 3.6M next time you receive 150,000 files. Think of it as space pre-allocated for directory entries. You do realize that the slots get reused?
Rikshi42
>> I have a directory in which sometimes 150.000+ files arrive, before they
>> get filed to their proper directories.
>> This dir reaches 3.6 MB at times.
> How large is the partition?
The use of space was not really the point. Sorry, I should have explained that. Imagine how sluggish a dir with 150.000 files feels. The 'problem' is that when the directory is emptied, it stays just as slow.
>> So far, the only way I've found to reduce it, is to remove and recreate
>> it each time my procedure has emptied it.
> What do you gain by reducing it? It's just going to grow back to 3.6M next
> time you receive 150,000 files. Think of it as space pre-allocated for
> directory entries. You do realize that the slots get reused?
I gain speed, mostly.And all my file arrivals are not that big.
This is what I have read in a book:
Because Linux directories do not shrink automatically, removing a file from a directory does not shrink the directory, even though it frees up space on the disk. To remove unused space and make a directory smaller, you must copy or move all the files to a new directory and remove the original directory.
maxwell lol
There are two different ways to think of the size of a directory. One is the size based on the contents of the directory. du shows this.
The second is the size of the "file" that holds the directory entries (the names and inode numbers of the files inside the directory). Don't get these confused.
If you do an ls -ld dirname, ls will show you the size of this file.
For instance:
drwxr-xr-x 2 root root 36864 2008-10-13 06:59 bin
drwxr-xr-x 2 root root 4096 2008-04-24 02:09 games
drwxr-xr-x 9 root root 4096 2008-10-06 06:32 include
drwxr-xr-x 120 root root 53248 2008-10-06 06:32 lib
drwxr-xr-x 3 root root 4096 2008-06-13 06:42 lib32
drwxr-xr-x 10 root root 4096 2008-04-24 02:04 local
Now du shows (in 1k blocks)
147528 bin
40 games
740 include
395440 lib
8 lib32
152 local
It is true that the size of the directory file grows as needed as you create more files. And when files are moved or deleted, the records in this file are marked "unused". So the file does not shrink in size.
But normally this difference is so small, it's not worth worrying about. In the above case, /usr/bin contains 147M, but the size of the bin directory file is 36K.
Trying to recover 8K of space on a 200GB disk is silly. But you are correct.
Rikishi42
Completely.
I have a directory into which 150,000+ files sometimes arrive before they get filed to their proper directories. This dir reaches 3.6 MB at times. So far, the only way I've found to reduce it is to remove and recreate it each time my procedure has emptied it.
The Natural philosopher
Yes, that is correct on the file systems I have used. There is no "garbage collection" on directories as such, just on the contents. It may not be correct on all filesystems though, I don't know.
John hassler
Rikishi42 writes:
> I have a directory in which sometimes 150.000+ files arrive, before they
> get filed to their proper directories.
> This dir reaches 3.6 MB at times.
How large is the partition?
> So far, the only way I've found to reduce it, is to remove and recreate
> it each time my procedure has emptied it.
What do you gain by reducing it? It's just going to grow back to 3.6M next time you receive 150,000 files. Think of it as space pre-allocated for directory entries. You do realize that the slots get reused?
Rikishi42
>> I have a directory in which sometimes 150.000+ files arrive, before they
>> get filed to their proper directories.
>> This dir reaches 3.6 MB at times.
> How large is the partition?
The use of space was not really the point. Sorry, I should have explained that. Imagine how sluggish a dir with 150,000 files feels. The 'problem' is that when the directory is emptied, it stays just as slow.
>> So far, the only way I've found to reduce it, is to remove and recreate
>> it each time my procedure has emptied it.
> What do you gain by reducing it? It's just going to grow back to 3.6M next
> time you receive 150,000 files. Think of it as space pre-allocated for
> directory entries. You do realize that the slots get reused?
I gain speed, mostly. And not all my file arrivals are that big.
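The behaviour described in this thread is easy to see with a quick experiment. This is only a sketch: the scratch directory name is arbitrary and the exact sizes are filesystem-dependent.

```shell
# Create a scratch directory, fill it with entries, then empty it, and
# watch the size of the directory file itself (what ls -ld reports).
set -e
dir=$(mktemp -d ./dirsize.XXXXXX)
before=$(stat -c %s "$dir")
for i in $(seq 1 500); do : > "$dir/file$i"; done   # add 500 directory entries
after=$(stat -c %s "$dir")
rm -f "$dir"/file*                                  # empty the directory again
emptied=$(stat -c %s "$dir")
echo "before=$before after=$after emptied=$emptied"
rmdir "$dir"
```

On ext2/3/4 you will typically see that `after` grows past `before` and `emptied` stays at `after`; the only way to shrink it back is to recreate the directory, as Rikishi42 does.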
Thursday, October 16, 2008
/etc/init.d vs /etc/event.d
Most of the files in the /etc/rcn.d and /etc/init.d directories will go away. Ubuntu emulates runlevels using Upstart to aid migration and provide compatibility with software for other distributions. This section explains how init scripts work with (emulated) runlevels to control system services. The /etc/rcn.d and /etc/init.d directories described in this section will largely be empty by the release of Ubuntu Gutsy+2 (the second Ubuntu release following Gutsy), the links in these directories having been replaced by job control files in /etc/event.d.
ssh logs
I can't find my SSH logs. I checked in /var/log/ and didn't find them. Anyone know where to look (or how to enable them)? If the logs aren't on by default, I really think they should be.
hyperair
/var/log/auth.log is probably what you're looking for. When someone logs in through SSH it's shown there. If you're just looking for last commands executed then go check ~/.bash_history.
Dr small
cat /var/log/auth.log | grep sshd
shane2peru
cat /var/log/auth.log | grep sshd
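A minimal illustration of that grep. The log lines below are made up for the example; on a real system point it at /var/log/auth.log, and note that grep can read the file directly, no cat needed.

```shell
# Fabricated auth.log excerpt for demonstration purposes only.
cat > sample_auth.log <<'EOF'
Oct 15 10:01:02 host sshd[1234]: Accepted password for sam from 10.0.0.5 port 51122 ssh2
Oct 15 10:01:05 host sudo: sam : TTY=pts/0 ; PWD=/home/sam ; USER=root ; COMMAND=/bin/ls
Oct 15 10:02:11 host sshd[1240]: Failed password for invalid user admin from 10.0.0.9 port 40100 ssh2
EOF
grep sshd sample_auth.log      # keeps only the sshd lines
grep -c sshd sample_auth.log   # -c just counts them
```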
Wednesday, October 15, 2008
'B core file'
Hi
This is what I have read in the book A Practical Guide to Ubuntu Linux:
sudo find / -type f -name core | xargs file | grep 'B core file' | sed 's/:ELF.*//g' | xargs rm -f
The find command lists all ordinary files named core and sends its output to xargs, which runs file on each of the files in the list. The file utility displays a string that includes B core file for files created as the result of a core dump. These files need to be removed. The grep command filters out from file any lines that do not contain this string. Finally sed removes everything following the colon so that all that is left on the line is the pathname of the core file; xargs then removes the file.
Can anyone explain what exactly is meant by the string/search pattern 'B core file'?
pinniped
You're looking for the string 'B core file'.
a. 'find' looks for any file named 'core'
b. 'xargs' generates a 'file /path/to/nth/core' list to process
c. 'grep' finds the string 'B core file' which essentially identifies each file named 'core' which is actually a core dump
d. 'sed' strips the string
e. 'xargs' takes the result of the sed part and uses it to create a command line to delete the offending file.
For example, I used 'ulimit' to allow a core dump, made a busy-loop program, ran it, and sent it a SIGSEGV to get a file named 'core'. Now doing "file core" gives me:
core: ELF 64-bit LSB core file x86-64, version 1 (SYSV), SVR4-style, from './a.out'
Note that 'B core file' matches "LSB core file" in the result from 'file'.
That 'sed' part is doing nothing on my machine using that output though - the sed command is a global substitution which is meant to remove a string - but I'm no sed expert so I don't know what's going wrong.
[edit] OK, OK - it's obvious that 'sed' should strip the colon and all other text following 'core' in that example line above. To limit accidental matches, a wider pattern is used, ": ELF.*", which means "match ': ELF' and everything which follows". So, in your post you're leaving out that very important space between ':' and 'ELF'.
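pinniped's point about the space can be checked on a sample file(1) line. The path below is hypothetical; note the sed pattern is ': ELF', with the space.

```shell
# One sample line of file(1) output for a core dump (path is made up).
line="/home/sam/core: ELF 64-bit LSB core file x86-64, version 1 (SYSV), SVR4-style, from './a.out'"
# grep keeps it because it contains "B core file" (part of "LSB core file");
# sed then strips ": ELF" and everything after it, leaving just the pathname.
printf '%s\n' "$line" | grep 'B core file' | sed 's/: ELF.*//'
```

With that space restored, the full (and destructive) pipeline from the book reads: sudo find / -type f -name core | xargs file | grep 'B core file' | sed 's/: ELF.*//' | xargs rm -f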
Tuesday, October 14, 2008
Why is it said that the /etc/mtab file should not be deleted?
Any file maintained by a program should not be deleted unless said program allows for such deletion. See man mount.
Why is it said that /var should reside on a separate partition from /usr?
iacullalad
Because the /var sub-directory contains all variable and temporary files created by the logged-in user. These files include temporary storage files downloaded from the Internet, log files, and print spooling. The /usr sub-directory, on the other hand, contains all user-related libraries and applications. Such sub-directories cannot be interchanged.
Francois Patte
For instance: /var is the place where logs are written. Suppose some process goes mad and fills the partition with logs....
Grant
And /usr may be mounted read-only.
Keith Keller
One reason to separate /usr from the rest of the filesystem is to make upgrades easier; you can mke2fs the /usr partition to wipe all vestiges of old binaries clean and start fresh. (Grant already mentioned another reason, the ability to mount /usr read-only.)
John hasler
Another reason is to eliminate write activity on the partition containing /usr, thereby increasing reliability.
Mark hobley
And /var may be mounted noexec.
The natural Philosopher
Yes. /var is for /variable/ data. Logs and often databases live there.
So it can grow and possibly exceed limits; having it separate from the parts that are necessary for recovery from such a state means you CAN recover.
Andrew Halliwel
Because the root "/" filesystem should never be allowed to fill up. If it does, all kinds of nastiness can occur.
And /var is one of the partitions on which programs dump their data, especially e-mail and news, web proxies, log files, etc.
It's a safeguard.
Unruh
Many things are said. Not all are sensible. Anyway, /var/ is written to. /usr is in general not. So /var can fill up.
Nico kadel Gracia
And /var/spool/mail, /var/spool/news, /var/spool/mqueue and /var/tmp/. Any of those may be overflowed quite badly.
The separation of /var also goes back to the days of much smaller disks, when it was wise to put even a modest mail spool on a separate disk or partition.
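The mount options mentioned in the thread (read-only /usr, noexec /var) would look something like this in /etc/fstab. The device names and filesystem type here are assumptions for illustration, not a recommendation of any particular layout:

```
# Hypothetical /etc/fstab fragment (adjust devices to your own layout)
/dev/sda2   /usr   ext3   ro,nodev                 0   2
/dev/sda3   /var   ext3   rw,nosuid,noexec,nodev   0   2
```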
Mount removable devices with the nosuid option
Always mount removable devices with the nosuid option so that a malicious user cannot, for example, put a setuid copy of bash on a disk and have a shell with root privileges. By default, Ubuntu uses the nosuid option when mounting removable media.
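As a sketch, an /etc/fstab line for a removable device mounted nosuid; the device, mount point, and filesystem type are assumptions:

```
# Hypothetical /etc/fstab entry for a USB stick
/dev/sdb1   /media/usb   vfat   user,noauto,nosuid,nodev   0   0
```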
ACL
Why is it said that enabling ACLs (access control lists) can reduce performance, and that you should not enable ACLs on filesystems that hold system files, where traditional Linux permissions are sufficient?
Well, one could imagine that if you enable ACL it is just one more thing for the system to check and thus there will be a performance hit.
In reality, I do not notice a difference in speed or performance with SELinux active vs inactive (SELinux is basically a system wide ACL). I seriously doubt you will notice any performance difference with any modern hardware after enabling ACL (just my 2c).
Going to Recovery (Single-User) Mode
The following steps describe a method of manually bringing the system down to recovery mode—the point where it is safe to turn the power off. Make sure you give other users enough warning before switching to recovery mode; otherwise they may lose the data they are working on. Because going from multiuser to recovery mode can affect other users, you must work with root privileges to perform all of these tasks except the first.
1. Use wall to warn everyone who is using the system to log out.
2.
If you are sharing files via NFS, use exportfs -ua to disable network access to the shared filesystems. (Use exportfs without an argument to see which filesystems are being shared.)
3.
Confirm no critical processes are running in the background (e.g., an unattended compile).
4.
Give the command telinit 1 (page 510) to bring the system down to recovery mode. The system displays messages about the services it is shutting down followed by a root shell prompt (#). In runlevel 1, the system kills many system services and then brings the system to runlevel S. The runlevel utility confirms the system was at runlevel 1 and is now at runlevel S.
$ sudo telinit 1
...
# runlevel
1 S
5.
Use umount -a to unmount all mounted devices that are not in use. Use mount without an argument to make sure that no devices other than root (/) are mounted before continuing.
Downloading Debian Iso's - 7 ISO's ?
When I went looking for Debian I found 7 binary ISOs, 1 NONUS ISO, an update ISO, etc. Before I start downloading: are all seven necessary? What is a NONUS ISO?
still a newbie...looking to try another distro....help me please?
i know if i googled and read and googled and read i could find out the answers but the answers i get from LQ are worth more to me....
jtshaw
You definitely don't need all 7 ISOs. The best way to do a Debian install is to download a netinstall CD; it'll download the packages as needed during the install.
Of course that doesn't work so well if you don't have a decent internet connection. Then again, downloading the 7 ISOs isn't going to be much better if you don't have a decent internet connection....
Monday, October 13, 2008
perm -4000
sulekha
I have read that
To check for a possible Trojan horse, examine the filesystem periodically for files with setuid permission. The following command lists these files:
Listing setuid files $ sudo find / -perm -4000 -exec ls -lh {} \; 2> /dev/null
Can anyone explain why the permission is given as -4000 in this command?
AFAIK I haven't seen any files with permission 4000.
jiliagre
You overlooked the "-" preceding 4000. It means the permissions need not be exactly 04000; only the bits set in 04000 need to also be set in the tested file.
04000 means precisely the setuid bit.
cariboo907
A permission of 4000 just means set-user-ID on execution. A better way to scan for rootkits is to install rkhunter. Rkhunter scans for rootkits daily and emails you the results.
Ex:
Suppose I have a program named myprogram and I want to make it setuid root:
chown root myprogram
chmod 4755 myprogram
And to make it setgid root:
chown root myprogram
chmod 2755 myprogram
Some common setuid programs are ping, mount, traceroute, su, etc.
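The matching behaviour of -perm -4000 can be demonstrated without root, on scratch files (the names here are arbitrary):

```shell
# A file with mode 4755 has the setuid bit plus rwxr-xr-x; -perm -4000
# matches it because the 04000 bit is set, even though the full mode
# is not exactly 04000.
set -e
mkdir -p permdemo
: > permdemo/myprogram
chmod 4755 permdemo/myprogram    # shows up as rwsr-xr-x in ls -l
: > permdemo/plain
chmod 0755 permdemo/plain        # no setuid bit
find permdemo -type f -perm -4000
```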
Saturday, October 11, 2008
pidof
The pidof utility displays the PID number of each process running the command you specify:
$ pidof nautilus
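A quick way to see it in action without nautilus; sleep stands in for any command, and the PID values will differ on your machine:

```shell
# Start a background sleep, then ask pidof for the PIDs of "sleep".
sleep 30 &
bgpid=$!
pidof sleep     # the list includes $bgpid (and any other running sleeps)
kill "$bgpid"
```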
.dmrc file being ignored, should be owned by user and certain permissions.. how do i?
mitchel7man
Hello, when starting ubuntu edgy eft 6.10 i get this error before gnome loads,"User's $HOME/.dmrc file is being ignored, preventing default session and language from being saved. File should be owned by user and have 644 permissions. User's $HOME directory must be owned by user and not writable by other users." What exactly do i need to do? ... and how? Thanks in advance,
aysiu
Boot into recovery mode.
Type Code:
chown -R mitchell:mitchell /home/mitchell
chmod 644 /home/mitchell/.dmrc
reboot
mitchel7man
i followed your advice, but upon boot i still get the error?, any ideas what may have went wrong or what to do?
aysiu
Well, the original error says it has to be not only 644 permissions but also owned by the user.
Reboot into recovery mode and try:
Code:
chown -R mitchell:mitchell /home/mitchell
reboot
mitchel7man
This is weird. I boot into recovery mode, type the root password, and execute the commands, to which I do not get any feedback (good or bad). Then I boot from recovery into gnome and execute the commands in a terminal (sudo chown -R mitchellj:mitchellj /home/mitchellj and chmod 644 /home/mitchellj/.dmrc), both of which execute with no feedback, negative or positive, just another prompt ending in $ where you type the command. I restart and I still get the error. Am I doing it wrong, or what's up?
aysiu
If there's no feedback, negative or positive, it means the command executed successfully. I'm not sure what might be wrong.
mitchell7man
"User's $HOME/.dmrc file is being ignored, preventing default session and language from being saved. File should be owned by user and have 644 permissions. User's $HOME directory must be owned by user and not writable by other users."
Hello, the commands you gave me run fine, but I still get the error. From what I understand up to this point, I have changed the owner of home to me, and changed the mode of .dmrc to 644, but is there something else I have to do to make .dmrc not be ignored? Thanks very much,
michalxo
Code: chmod 700 ~/
mitchell7man
thanks
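What the display manager is checking can be verified with stat. This sketch uses a scratch directory standing in for $HOME; the paths are illustrative only.

```shell
# $HOME must be owned by the user and not writable by others (700 is
# safe); .dmrc must be mode 644 and owned by the user.
set -e
home=$(mktemp -d)       # stand-in for /home/mitchell
: > "$home/.dmrc"
chmod 700 "$home"
chmod 644 "$home/.dmrc"
stat -c '%a %U' "$home"       # mode and owner of the home directory
stat -c '%a' "$home/.dmrc"    # mode of .dmrc
```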
.dmrc file
The ~/.dmrc file stores the user's default session and language; the display manager reads it when launching the user's session.
N.B. Tested in Ubuntu 8.04.
Unlocking and re-locking the root account in Ubuntu
Except for a few instances, there is no need to unlock the root account on an Ubuntu system and Ubuntu suggests that you do not do so. The following command unlocks the root account by assigning a password to it:
$ sudo passwd root
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Re-locking the root account
If you decide you want to lock the root account after unlocking it, give the command sudo passwd -l root. You can unlock it again with the preceding command. (In general, to unlock an account: passwd -u useraccount.)
Tested in Ubuntu 8.04.
Always use visudo to edit the sudoers file
A syntax error in the sudoers file can prevent you from using sudo to gain root privileges. If you
edit this file directly (without using visudo), you will not know that you introduced a syntax error
until you find you cannot use sudo. The visudo utility checks the syntax of sudoers before it
allows you to exit. If it finds an error, it gives you the choice of fixing the error, exiting without saving the changes to the file, or saving the changes and exiting. The last is usually a poor choice, so visudo marks the last choice with (danger!).
Friday, October 10, 2008
spawning a root shell
When you have several commands you need to run with root privileges, it may be easier to spawn a root shell, give the commands without having to type sudo in front of each one, and exit from the shell. This technique defeats some of the safeguards built into sudo, so use it carefully and remember to return to a nonroot shell as soon as possible. Use the sudo -i option to spawn a root shell:
$ pwd
/home/sam
$ sudo -i
# id
uid=0(root) gid=0(root) groups=0(root)
# pwd
/root
# exit
Tested on ubuntu 8.04
cleaning APT cache in ubuntu 8.04
Recently I read in a book as follows:
cleaning APT cache
edit /etc/apt/apt.conf.d/10periodic as follows
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::AutocleanInterval "0";
APT::Archives::MaxAge "7";
But in my /etc/apt/apt.conf.d/10periodic file I am finding only these entries:
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::AutocleanInterval "7";
There is no APT::Archives::MaxAge "7"; line. Should I add it?
cariboo907
Look in /etc/apt/apt.conf.d/20archive; that is where the
APT::Archives::MaxAge 30
directive is set.
I'm using the Intrepid beta; mine is set at 30.
sudo log file
I have read in a book that the sudo utility logs all commands it executes. This log can be useful for retracing your steps if you make a mistake and for system auditing. What is the name of this log file and where is it located?
N.B.: I use Ubuntu 8.04.
jerome
/var/log/auth.log
and the older copies will be
/var/log/auth.log.0
/var/log/auth.log.1.gz
/var/log/auth.log.2.gz
etc...
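A sketch of reading both the current and a rotated (gzipped) copy. The log line below is fabricated for the example; on a real system the files are /var/log/auth.log*.

```shell
# Fabricated sudo entry in the auth.log format.
printf '%s\n' 'Oct 10 09:00:01 host sudo: sam : TTY=pts/0 ; PWD=/home/sam ; USER=root ; COMMAND=/usr/bin/apt-get update' > auth.log.sample
gzip -c auth.log.sample > auth.log.sample.1.gz   # pretend logrotate ran
grep sudo auth.log.sample                        # current log
gzip -dc auth.log.sample.1.gz | grep sudo        # compressed, rotated copy
```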
Run graphical programs using gksudo not sudo
Use gksudo (or kdesu from KDE) instead of sudo when you run a graphical program that requires root privileges. Although both utilities run a program with root privileges, sudo uses your configuration files, whereas gksudo uses root’s configuration files. Most of the time this difference is not important, but sometimes it is critical. Some programs will not run when you call them with sudo. Using gksudo can prevent incorrect permissions from being applied to files related to the X Window System in your home directory. In a few cases, misapplying these permissions can prevent you from logging back in. In addition, you can use gksudo in a launcher on the
desktop or on a panel.
OpenSSH Root user account restriction - revisited
Posted By nixcraft On August 2, 2006 @ 7:13 pm In Linux, Linux login control, PAM, Security | 7 Comments
One of our [1] articles generated a few more questions regarding root login issues over SSH sessions. One reader ([2] eMBee) asks, "I need something that allows me to say: allow any users except root from anywhere, and root only from localhost (over an ssh session)."
PAM offers very powerful authentication control. You need to use the pam_access PAM module, which is mainly for access management. It provides login access control based on:
* Login names
* Host or domain names
* Internet addresses or network IP numbers
* Terminal line names etc
Why does pam_access matter?
On a production server, authorized login can come from any networked computer. Therefore, it is important to have tight control over users who are allowed to connect server via OpenSSH server.
How do I configure pam_access?
You need to edit the following files:
1. /etc/pam.d/sshd - Linux PAM configuration file.
2. /etc/security/access.conf - By default, rules for access management are taken from this configuration file. When someone logs in, the entries in this file are scanned and matched against the rules. You can specify whether the login will be accepted or refused for the user. The general syntax is as follows:
permission : username: origins
Where,
* permission : Permission field should be a "+" (access granted) or "-" (access denied)
character.
* username : Linux system username/login name such as root, vivek etc. You can also specify group names, and you can use the special keyword ALL (to match all usernames).
* origins : A list of one or more tty names, host names, IP addresses, or domain names that begin with ".", or the special keywords ALL or LOCAL.
Let us say you want to allow user root and vivek login from IP address 202.54.1.20 only.
Open file /etc/security/access.conf
# vi /etc/security/access.conf
Append the following line:
-: ALL EXCEPT root vivek:202.54.1.20
Save the file and open the /etc/pam.d/sshd file:
# vi /etc/pam.d/sshd
Append the following entry:
account required pam_access.so
Save and close the file.
Now ssh will accept logins for root and vivek only from IP address 202.54.1.20. If user vivek (or root) tries to log in to the ssh server from IP address 203.111.12.3, he will get a
'Connection closed by xxx.xxx.xx.xx' error, and the following log entry should be written to your log file:
# tailf /var/log/messages
Output:
Aug 2 19:02:39 web02 pam_access[2091]: access denied for user `vivek' from `203.111.12.3'
Remember, changes to /etc/security/access.conf take effect as soon as you save the file, so be careful when writing rules.
More examples
a) I need something that allows me to say: allow any users except root from anywhere, and root only from localhost.
-:root:ALL EXCEPT LOCAL
OR
-:root:ALL EXCEPT localhost
b) Deny network and local login to all users except for user root and vivek:
-:ALL EXCEPT root vivek:ALL
c) Only allow root user login from 192.168.1.0/24 network:
+ : root : 192.168.1.0/24
Please note that this kind of restriction can be applied to any PAM-aware application/service, such as ftpd, telnet, etc.
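The same one-line entry used above for sshd can be added to any other service's PAM file. For example, to apply your access.conf rules to an FTP daemon (a hypothetical illustration; the exact service file name depends on your distribution and FTP package):

```
# /etc/pam.d/vsftpd -- service file name varies by distribution/package
account  required  pam_access.so
```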
Article printed from nixCraft: http://www.cyberciti.biz/tips
URL to article: http://www.cyberciti.biz/tips/openssh-root-user-account-restriction-revisited.html
URLs in this post:
[1] article: http://www.cyberciti.biz/tips/openssh-deny-or-restrict-access-to-users-and-groups.html
[2] eMBee: http://www.cyberciti.biz/tips/openssh-deny-or-restrict-access-to-users-and-groups.html#comment-3987
Where are my Temporary Internet Files in Ubuntu?
I am working on Ubuntu. I want to know how to delete temporary internet files in Ubuntu.
shirilover
Your browser cache depends on which browser you use. For Firefox, it is typically ~/.mozilla/firefox/xxxxxxxx.default/Cache
For Opera, I believe it is ~/.opera/cache4
HTH
If you are using Firefox you can go to the menu:
Edit->Preferences->(Privacy tab) Private Data and select "Always clear my private data when I close Firefox"; the Settings button lets you customize what is cleared.
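The cache can also be cleared from the shell. A minimal sketch, demonstrated on a scratch directory so it is safe to run anywhere; the real Firefox cache lives at a path like ~/.mozilla/firefox/xxxxxxxx.default/Cache (the profile directory name varies per installation):

```shell
# Stand-in for the browser cache directory; substitute your real profile path
cache="$(mktemp -d)/Cache"
mkdir -p "$cache"
echo "cached page" > "$cache/0001.d"   # pretend cached file

du -sh "$cache"                        # see how much space the cache uses
rm -rf "$cache"                        # delete the cache directory
[ -d "$cache" ] || echo "cache cleared"
```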
Remove unnecessary locale data
For this we need to install localepurge, which automatically removes unnecessary locale data. It is just a simple script to recover disk space wasted on unneeded locale files and localized man pages, and it is automatically invoked upon completion of any apt installation run.
Install localepurge in Ubuntu
sudo apt-get install localepurge
After installing anything with apt-get install, localepurge will remove all translation files and translated man pages in languages you cannot read.
If you want to configure localepurge you need to edit /etc/locale.nopurge
This can save you several megabytes of disk space, depending on the packages you have installed.
Example:
Installing discus using apt-get:
sudo apt-get install discus
At the end of the installation you will see something like this:
localepurge: Disk space freed in /usr/share/locale: 41860K
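To see how much space locale data occupies on your own system at any time, du works well (/usr/share/locale is the usual location, as in the output above):

```shell
# Total size of installed locale data; the path exists on most distributions
before=$(du -sh /usr/share/locale 2>/dev/null || echo "0 (not present)")
echo "locale data: $before"
```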
Remove “orphaned” packages
If you want to remove orphaned packages, you need to install the deborphan package.
Install deborphan in Ubuntu
sudo apt-get install deborphan
Using deborphan
Open your terminal and enter the following command:
sudo deborphan | xargs sudo apt-get -y remove --purge
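One caution about the pipeline above: if deborphan finds nothing, plain xargs would still invoke apt-get once with no package names. GNU xargs's --no-run-if-empty option (short form -r) guards against that. Here is the behaviour demonstrated with echo standing in for apt-get and hypothetical package names:

```shell
# Empty input: --no-run-if-empty suppresses the command entirely
printf '' | xargs --no-run-if-empty echo "would remove:"

# Non-empty input: the command runs once with the package names appended
printf 'liboldfoo liboldbar\n' | xargs echo "would remove:"
```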
Thursday, October 9, 2008
bashrc vs. bash_profile
mikeshn
bashrc, bash_profile - these two files look pretty similar. What is the purpose of these files?
bogdan
I got these lines from bash manual.
~/.bash_profile
The personal initialization file, executed for login shells
~/.bashrc
The individual per-interactive-shell startup file
To be more clear:
.bashrc will be executed every time you open a terminal from within Gnome, as you are not making a login.
.bash_profile is executed after you log in at a terminal. On desktop systems, if I am not mistaken, this file is executed just once, after the graphical login. If you booted without graphical mode, it would be executed after a successful login at the bash login prompt.
geirha
No, opening a terminal in Gnome will also give you a login shell. .bash_profile is read when you open a terminal, log in at the console, or specifically start a login shell with:
bash --login
.bashrc is read for non-login shells. Typically that means when you run bash scripts: a new bash process is spawned to run the script, and it reads only .bashrc.
The .bash_profile Ubuntu provides runs .bashrc too, so .bashrc is "always" read.
kpkeerthi
The commands in .bash_profile will be run every time the user logs in.
The commands in .bashrc will be run every time you open a bash shell (the terminal); .bash_profile will not be run in this case.
* Usually .bash_profile internally invokes .bashrc. So .bashrc will also be run once upon login.
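The arrangement kpkeerthi describes, a .bash_profile that pulls in .bashrc, typically looks like the following fragment (a common pattern; Ubuntu's stock dotfiles achieve the same effect via ~/.profile):

```shell
# ~/.bash_profile -- executed by bash for login shells only
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"    # so interactive settings apply to login shells too
fi
```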

Why is it said that if you are using grub as the boot loader, make the file system that holds the /boot partition an ext2 filesystem ?
syg00
Was more of an issue in the past. The boot partition doesn't get updated very often, so it was thought there was no need for a journal on the filesystem; hence ext2 rather than ext3, which also saves a little space. These days, with large cheap disks, just allocate a decent amount of space (say 50-100 MB) and don't worry about it; use ext3 if you prefer.
Herman
If you are installing on a very small hard disk, there is a small saving in disk space to be gained by using ext2 instead of ext3. Remember, it wasn't all that long ago that an 8 GB hard disk was considered very large. The original hard disk in my old 'Book PC was 6 GB and I dual booted Windows 98 and Ubuntu Warty Warthog in 3.0 GB each. Now we are used to 80 GB or 160 GB hard disks, and even 500 GB hard disks and larger are available!
By modern standards, with the huge hard disks most people have now, the saving is not noticeable at all. I think that claim is out of date unless you are installing on a USB flash drive or something like that where every bit of disk space counts. (Just my personal opinion), but then you would be better off using reiserfs anyway. There is a 6% space saving with reiserfs compared with ext2, and it is around eight to fifteen times faster than ext2 when handling files smaller than one KB in size. Even flash memories these days are getting quite large.
Wednesday, October 8, 2008
Open in Terminal selection in context menus
When you install the nautilus-open-terminal package and reboot the system, Nautilus presents an Open in Terminal selection in context menus where appropriate. For example, with this package installed, when you right-click a folder (directory) object and select Open in Terminal, Nautilus opens a terminal emulator with that directory as the working directory.
NB: tested on Ubuntu 8.04
Wiping a file
You can use a similar technique to wipe data from a file before deleting it, making it almost impossible to recover data from the deleted file. You might want to wipe a file for security reasons.
In the following example, ls shows the size of the file named secretfile. Using a block size of 1 and a count corresponding to the number of bytes in secretfile, dd wipes the file. The conv=notrunc argument ensures that dd overwrites the data in the file rather than writing to another (erroneous) place on the disk.
$ ls -l secretfile
-rw-r--r-- 1 sam sam 5733 2007-05-31 17:43 secretfile
$ dd if=/dev/urandom of=secretfile bs=1 count=5733 conv=notrunc
5733+0 records in
5733+0 records out
5733 bytes (5.7 kB) copied, 0.0358146 seconds, 160 kB/s
$ rm secretfile
For added security, run sync to flush the disk buffers after running dd, and repeat the two commands several times before deleting the file. See wipe.sourceforge.net for more information about wiping files
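The repeat-and-sync advice can be scripted as a small loop. A sketch, demonstrated here on a throwaway file created with mktemp so it is safe to run; substitute your real file and its byte count:

```shell
f=$(mktemp)                              # throwaway stand-in for secretfile
printf 'top secret data' > "$f"
size=$(wc -c < "$f")                     # number of bytes, as shown with ls -l

for pass in 1 2 3; do                    # several overwrite passes
    dd if=/dev/urandom of="$f" bs=1 count="$size" conv=notrunc 2>/dev/null
    sync                                 # flush disk buffers after each pass
done
rm "$f"
```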
Stopping the X Server
How you terminate a window manager depends on which window manager you are running and how it is configured. If X stops responding, switch to a virtual terminal, log in from another terminal or a remote system, or use ssh to access the system. Then kill the process running X. You can also press CONTROL-ALT-BACKSPACE to quit the X server. This method may not shut down the X session cleanly; use it only as a last resort.
tested on: ubuntu 8.04
/etc/login.defs
Configuration control definitions for the login package
Systemwide values used by user and group creation utilities such as useradd and groupadd are kept in the /etc/login.defs file. Here you will find the range of possible user and group IDs: UID_MIN holds the minimum user ID and UID_MAX the maximum. Various options control password policy, such as PASS_MAX_DAYS, which determines the maximum number of days a password remains valid. Many password options, such as password lengths, are now handled by Pluggable Authentication Modules (PAM).
Samples of these entries are shown here:
UID_MIN 1000
MAIL_DIR /var/mail
PASS_MAX_DAYS 99999
source: Richard Petersen
NB: tested on Ubuntu 8.04
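You can pull such values straight out of the file with grep. Demonstrated here against a small sample file so the output is predictable; on a real system, point grep at /etc/login.defs instead:

```shell
# Sample with the kinds of entries shown above
cat > /tmp/login.defs.sample <<'EOF'
# /etc/login.defs excerpt
UID_MIN          1000
MAIL_DIR         /var/mail
PASS_MAX_DAYS    99999
EOF

# Extract the ID range and password-ageing settings, skipping comments
grep -E '^(UID_MIN|UID_MAX|PASS_MAX_DAYS)' /tmp/login.defs.sample
```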
Helping When a User Cannot Log In
When a user has trouble logging in on the system, the source may be a user error or a problem with the system software or hardware. The following steps can help determine where the problem is:
• Check the log files in /var/log. The /var/log/messages file accumulates system errors, messages from daemon processes, and other important information. It may indicate the cause or more symptoms of a problem. Also, check the system console. Occasionally messages about system problems that are not written to /var/log/messages (for instance, a full disk) are displayed on the system console.
• Determine whether only that one user or only that one user’s terminal/workstation has a problem or whether the problem is more widespread.
• Check that the user’s Caps Lock key is not on.
• Make sure the user’s home directory exists and corresponds to that user’s entry in the /etc/passwd file. Verify that the user owns her home directory and startup files and that they are readable (and, in the case of the home directory, executable). Confirm that the entry for the user’s login shell in the /etc/passwd file is accurate and the shell exists as specified.
• Change the user’s password if there is a chance that he has forgotten the correct password.
• Check the user’s startup files (.profile, .login, .bashrc, and so on). The user may have edited one of these files and introduced a syntax error that prevents login.
• Check the terminal or monitor data cable from where it plugs into the terminal to where it plugs into the computer (or as far as you can follow it). Try turning the terminal or monitor off and then turning it back on.
• When the problem appears to be widespread, check whether you can log in from the system console. Make sure the system is not in recovery mode. If you cannot log in, the system may have crashed; reboot it and perform any necessary recovery steps (the system usually does quite a bit automatically).
• If the user is logging in over a network connection, run the appropriate init script (page 507) to restart the service the user is trying to use (e.g., ssh).
• Use df to check for full filesystems. If the /tmp filesystem or the user’s home directory is full, login sometimes fails in unexpected ways. In some cases you may be able to log in to a textual environment but not a graphical one. When applications that start when the user logs in cannot create temporary files or cannot update files in the user’s home directory, the login process itself may terminate.
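A few of the checks above can be scripted. A rough sketch using getent, which reads the user's /etc/passwd entry as the list describes (shown here checking root so it runs anywhere; substitute the affected user's name):

```shell
user=root                                   # substitute the user who cannot log in
entry=$(getent passwd "$user")
if [ -z "$entry" ]; then
    echo "no passwd entry for $user"
else
    home=$(printf '%s\n' "$entry" | cut -d: -f6)   # field 6: home directory
    shell=$(printf '%s\n' "$entry" | cut -d: -f7)  # field 7: login shell
    [ -d "$home" ]  && echo "home directory $home exists" \
                    || echo "home directory $home missing"
    [ -x "$shell" ] && echo "login shell $shell is executable" \
                    || echo "login shell $shell is broken"
    df /tmp                                 # full filesystems cause odd login failures
fi
```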
/etc/profile
The shell first executes the commands in /etc/profile. A user working with root privileges can set up this file to establish systemwide default characteristics for bash users.
Example: a systemwide umask value can be set in /etc/profile.
Display the current umask value symbolically: umask -S
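A systemwide default might be set near the end of /etc/profile with a line like the following (022 is a common but purely illustrative choice; it yields mode 644 files and 755 directories):

```shell
# Fragment for /etc/profile: mask group/other write permission on new files
umask 022
umask -S    # show the resulting value symbolically
```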
~/.bash_logout
# ~/.bash_logout: executed by bash(1) when login shell exits.
# when leaving the console clear the screen to increase privacy
if [ "$SHLVL" = 1 ]; then
[ -x /usr/bin/clear_console ] && /usr/bin/clear_console -q
fi
which versus whereis
Given the name of a program, which looks through the directories in your search path, in order, and locates the program. If the search path includes more than one program with the specified name, which displays the name of only the first one (the one you would run).
The whereis utility looks through a list of standard directories and works independently of your search path. Use whereis to locate a binary (executable) file, any manual pages, and the source code for a program you specify; whereis displays all the files it finds.
caution
Both the which and whereis utilities report only the names of commands as they are found on disk; they do not report shell builtins (utilities that are built into a shell).
When you use whereis to try to find where the echo command (which exists as both a utility program and a shell builtin) is kept, you get the following result:
$ whereis echo
echo: /bin/echo /usr/share/man/man1/echo.1.gz
The whereis utility does not display the echo builtin. Even the which utility reports the wrong
information:
$ which echo
/bin/echo
Under bash you can use the type builtin (page 445) to determine whether a command is a builtin:
$ type echo
echo is a shell builtin
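bash's type also accepts -a, which lists every form of the name it can find, so the builtin and the file on disk show up together:

```shell
# -a lists all matches: the builtin first, then any files on the search path
bash -c 'type -a echo'
```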
whereis examples
whereis
The whereis command performs an incredibly useful function: It tells you the paths for a command's executable program, its source files (if they exist), and its man pages. For instance, here's what you might get for KWord, the word processor in the KOffice set of programs (assuming, of course, that the binary, source, and man files are all installed):
$ whereis kword
kword: /usr/src/koffice-1.4.1/kword /usr/bin/kword /usr/bin/X11/kword /usr/share/man/man1/kword.1.gz
The whereis command first reports where the source files are: /usr/src/koffice-1.4.1/kword. Then it informs you of the location of any binary executables: /usr/bin/kword and /usr/bin/X11/kword. KWord is found in two places on this machine, which is a bit unusual but not bizarre. Finally, you find out where the man pages are: /usr/share/man/man1/kword.1.gz. Armed with this information, you can verify that the program is in fact installed on this computer, and you now know how to run it. If you want to search only for binaries, use the -b option.
$ whereis -b kword
kword: /usr/bin/kword /usr/bin/X11/kword
If you want to search only for man pages, the -m option is your ticket.
$ whereis -m kword
kword: /usr/share/man/man1/kword.1.gz
Finally, if you want to limit your search only to sources, try the -s option.
$ whereis -s kword
kword: /usr/src/koffice-1.4.1/kword
The whereis command is a good, quick way to find vital information about programs on the computer you're using. You'll find yourself using it more than you think.
source: Scott Granneman
Monday, October 6, 2008
cat /etc/nsswitch.conf
controls the order in which host name resolutions are checked
/etc/nsswitch.conf file
----------------------
Every Linux computer has this file that determines what exactly should happen when translating a host name to an IP address and vice versa.
This file specifies many things (such as user configuration), but only the following lines are important for resolving host names:
hosts: files dns
networks: files
These two lines specify that, when resolving host names as well as network names, the (local) files should be searched first, and that the DNS subsystem should be used only if the files have no information about the given host. Thus, an administrator can make sure that frequently accessed host names are resolved locally, and the DNS is contacted only when the local files cannot answer the query.
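The getent utility resolves names through this same nsswitch order, so it is a handy way to test the configuration; localhost is normally answered by the files source (/etc/hosts):

```shell
# Resolve via the "hosts:" line in /etc/nsswitch.conf;
# localhost should come from /etc/hosts, not DNS
getent hosts localhost
```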
sulekha
The following line is taken from my /etc/nsswitch.conf file:
hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4
Can anyone explain to me what is meant by mdns4_minimal, [NOTFOUND=return], and mdns4?
spd106
Those are the recommended settings for Avahi (the mDNS daemon).
See this page at the project website for more information.
http://avahi.org/wiki/AvahiAndUnicastDotLocal
---------------------------------------------------------------------------------
To turn off a user’s account temporarily
To turn off a user’s account temporarily, you can use usermod to change the expiration date for the account. Because it specifies that his account expired in the past (December 31, 2007), the following command line prevents Max from logging in:
$ sudo usermod -e "12/31/07" max
See the usermod man page for more information.
/var/log/auth.log
Holds messages from security-related programs such as sudo and the sshd daemon.
verified on ubuntu 8.04
rc-default task and inittab
Under SysVinit, the initdefault entry in the /etc/inittab file tells init which runlevel to bring the system to when it comes up. Ubuntu does not include an inittab file and, by default, the Upstart init daemon (using the rc-default task) boots the system to multiuser mode (runlevel 2, the default runlevel). If you want the system to boot to a different runlevel, create an inittab file. The following file causes the system to boot to single-user mode (runlevel S):
$ cat /etc/inittab
id:S:initdefault:
verified on ubuntu 8.04
Friday, October 3, 2008
bash built ins
A builtin is a utility (also called a command) that is built into a shell. Each of the shells has its own set of builtins. When it runs a builtin, the shell does not fork a new process. Consequently builtins run more quickly and can affect the environment of the current shell. Because builtins are used in the same way as utilities, you will not typically be aware of whether a utility is built into the shell or is a stand-alone utility. The echo utility is a shell builtin. The shell always executes a shell builtin before trying to find a command or utility with the same name.
Listing bash builtins
To get a complete list of bash builtins, give the command info bash builtin. To display a page with more information on each builtin, move the cursor to one of the lines listing a builtin command and press RETURN. Alternatively, after typing info bash, give the command /builtin to search the bash documentation for the string builtin. The cursor will rest on the word Builtin in a menu; press RETURN to display the builtins menu.
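You can also ask the shell directly whether a given name is a builtin, using the type builtin itself:

```shell
# type reports whether a name is a builtin or a stand-alone utility
type echo    # reports that echo is a shell builtin
type cd      # reports that cd is a shell builtin
type ls      # reports the path of the stand-alone ls utility
```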
PATH and security
Do not put the working directory first in PATH when security is a concern. If you are working as root, you should never put the working directory first in PATH. It is common for root’s PATH to omit the working directory entirely. You can always execute a file in the working directory by prepending ./ to the name: ./ls.
Putting the working directory first in PATH can create a security hole. Most people type ls as the
first command when entering a directory. If the owner of a directory places an executable file
named ls in the directory, and the working directory appears first in a user’s PATH, the user giving
an ls command from the directory executes the ls program in the working directory instead of the
system ls utility, possibly with undesirable results.
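A quick way to check whether the working directory appears in PATH (an empty field, ::, also means the working directory) is a sketch like this:

```shell
# Print yes if a PATH-style string contains "." or an empty field, else no
in_path_cwd() {
    case ":$1:" in
        *:.:*|*::*) echo yes ;;
        *)          echo no ;;
    esac
}
in_path_cwd "$PATH"
```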
man vs info
The info utility displays more complete and up-to-date information on GNU utilities than does man. When a man page displays abbreviated information on a utility that is covered by info, the man page refers to info. The man utility frequently displays the only information available on non-GNU utilities. When info displays information on non-GNU utilities, it is frequently a copy of the man page.
xev utility
You can run xev (X event) by giving the command xev from a terminal emulator window and then watch the information flow from the client to the server and back again. This utility opens the Event Tester window, which has a box in it, and asks the X server to send it events each time anything happens, such as moving the mouse pointer, clicking a mouse button, moving the mouse pointer into the box, typing, or resizing the window. The xev utility displays information about each event in the window you opened it from. You can use xev as an educational tool: Start it and see how much information is processed each time you move the mouse. Close the Event Tester window to exit from xev.
script: Records a Shell Session
user@ubuntu:~$ script
Script started, file is typescript
user@ubuntu:~$ ls -l /dev/sda
brw-rw---- 1 root disk 8, 0 2008-10-03 09:47 /dev/sda
user@ubuntu:~$ ls -l /proc | head -10
total 0
dr-xr-xr-x 6 root root 0 2008-10-03 09:47 1
dr-xr-xr-x 6 root root 0 2008-10-03 09:47 10
dr-xr-xr-x 6 root root 0 2008-10-03 09:47 11
dr-xr-xr-x 6 root root 0 2008-10-03 11:16 11728
dr-xr-xr-x 6 root root 0 2008-10-03 11:16 11729
dr-xr-xr-x 6 root root 0 2008-10-03 11:30 11920
dr-xr-xr-x 6 user user 0 2008-10-03 11:51 12031
dr-xr-xr-x 6 user user 0 2008-10-03 12:07 12103
dr-xr-xr-x 6 user user 0 2008-10-03 12:17 12200
user@ubuntu:~$ exit
exit
Script done, file is typescript
user@ubuntu:~$ cat typescript
Script started on Friday 03 October 2008 12:37:43 PM IST
user@ubuntu:~$ ls -l /dev/sda
brw-rw---- 1 root disk 8, 0 2008-10-03 09:47 /dev/sda
user@ubuntu:~$ ls -l /proc | head -10
total 0
dr-xr-xr-x 6 root root 0 2008-10-03 09:47 1
dr-xr-xr-x 6 root root 0 2008-10-03 09:47 10
dr-xr-xr-x 6 root root 0 2008-10-03 09:47 11
dr-xr-xr-x 6 root root 0 2008-10-03 11:16 11728
dr-xr-xr-x 6 root root 0 2008-10-03 11:16 11729
dr-xr-xr-x 6 root root 0 2008-10-03 11:30 11920
dr-xr-xr-x 6 user user 0 2008-10-03 11:51 12031
dr-xr-xr-x 6 user user 0 2008-10-03 12:07 12103
dr-xr-xr-x 6 user user 0 2008-10-03 12:17 12200
user@ubuntu:~$ exit
exit
Script done on Friday 03 October 2008 12:38:26 PM IST
NB;- tested on ubuntu hardy
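The util-linux script utility can also record a single command non-interactively with -c, writing to a named file instead of the default typescript (a quick sketch):

```shell
# Record one command's session to session.log instead of ./typescript
script -q -c 'echo hello from script' session.log
# The transcript contains the command's output
grep 'hello from script' session.log
rm session.log
```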
/var/log
/var/log/lastlog
contains information about the most recent login of each user. The information from this log is used by the lastlog utility.
/var/log/messages
system messages from syslogd
/var/log/wtmp
stores the history of logins to the system. The information from this file is used by the last utility.
/var/log/btmp
this file stores information about unsuccessful login attempts. The information from this file is used by the lastb command.
/var/log/daemon.log
all daemon facility messages
/var/log/dpkg.log
package management log
/var/log/dmesg
dump of kernel message buffer
/var/log/faillog
un-successful login attempts
/var/log/kern.log
all kernel facility messages
NB:-
/var/run/utmp
stores information about the current connections to the system. The information from this file is used by the who and w utilities.

NB :- tested on ubuntu hardy
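Each of these logs has a matching query utility; a quick sketch (the || true and 2>/dev/null guards keep it from failing on systems where a log file or utility is missing):

```shell
who || true                            # current sessions, from /var/run/utmp
last 2>/dev/null | head -5 || true     # login history, from /var/log/wtmp
lastlog 2>/dev/null | head -5 || true  # most recent login per user, from /var/log/lastlog
```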