

LINUX GAZETTE

July 2003, Issue 92       Published by Linux Journal



Linux Gazette Staff and The Answer Gang

Editor: Michael Orr
Technical Editor: Heather Stern
Senior Contributing Editor: Jim Dennis
Contributing Editors: Ben Okopnik, Dan Wilder, Don Marti

TWDT 1 (gzipped text file)
TWDT 2 (HTML file)
These files contain the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.
Linux Gazette[tm], http://www.linuxgazette.com/
This page maintained by the Editor of Linux Gazette, gazette@ssc.com

Copyright © 1996-2003 Specialized Systems Consultants, Inc.

LINUX GAZETTE
...making Linux just a little more fun!
The Mailbag
From The Readers of Linux Gazette


HELP WANTED : Article Ideas
Submit comments about articles, or articles themselves (after reading our guidelines) to The Editors of Linux Gazette, and technical answers and tips about Linux to The Answer Gang.


BiDi Problems in WINE + SMARTDRAW

Thu, 12 Jun 2003 17:49:30 -0300
Daniel Carneiro do Nascimento (dcn from microlink.com.br)

#sorry about my english... i'ved learned that by myself.. so ..
# U can make some modifications < of course.. it's gpl..> in my english mistakes

So I did, just a little, though usually we leave questions alone so people have a sense of how the querent meant things :) -- Heather

Hiya guys..

I have a problem ( d' aah)

I've tried to use SmartDraw under wine.. and then.. after I configure everything.. It works! At least, I think that, when I see SmartDraw starting.. showing the initial WELCOME.. etc.. but.. when he tries to show me the initial screen < to chose the objects of my diagram> BUMMER! My wine DIES.

my log is so big.. and every thing happens about BiDi...

#] warn:font:GetCharacterPlacementW The BiDi algorythm doesn't conform to Windows'

And then.. BiDi throws a lot of junk < i suppose> in my memory causing some HEAPS Faults:.

#] warn:heap:HEAP_IsRealArena Heap 0x40db0000: block 0x408acf is not inside heap

there's not an upgrade for BiDi available.. and.. since November 22.. BiDi has been going crazy... with some programs that request some kind of.. font.. i don't know...

The HEAP Faults problem.. I solved myself making a bigger "X:/temp" and including a new path for junk.. but.. WINE couldn't pass through BiDi, when it get a crash.. cause the BiDi NEVER stops to send some.. THING. < i don't know what either.> to the memory.. that fills up.. whatever is your /temp size! < mine is 2 G!>

I just don't know what to do! I'm really really lost.. and.. I need to make wine work... it's not for the program itself.. it's for the HONOR! AHUuhauahh

DO you guys know ANYTHING about that Suddenly Crashing?!? Or.. incompatibility ? Or whatever you call it... ...

Tnkx so much for reading my crappy email...

PS:. .. HEEEEEELP!

Daniel Carneiro do Nascimento


Squid and FTP

Fri, 27 Jun 2003 11:26:16 +0300
Nickos Yoldassis (niyo from teipat.gr)

Hi there,

I use squid as a proxy server (default configuration) and it seems that I can't connect to ftp sites through it. Do I have to do anything?

Nickos, Greece

It appears that this is an FAQ in the land of Squid, number 12.17 -- "Can I make my regular FTP clients use a Squid cache?"

Nope, it's not possible. Squid only accepts HTTP requests. It speaks FTP on the server-side, but not on the client-side.

The very cool wget will download FTP URLs via Squid (and probably any other proxy cache).
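
For instance, something like the following should work; the proxy name and port here are placeholders (Squid listens on port 3128 by default, but check your squid.conf):

	# tell wget to send FTP URLs through the Squid cache
	export ftp_proxy=http://proxy.example.com:3128/
	wget ftp://ftp.kernel.org/pub/linux/kernel/README

wget speaks plain HTTP to the proxy, and Squid does the FTP fetch on the far side.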

However, it would be fun to have an article about somebody using Squid and/or other site caching software in powerful ways to make their site's view of the web more fun. There are a bunch of add-ons at Freshmeat for it, so I'm sure someone out there has a great example for us to follow. Don't forget to read our author submission guidelines. -- Heather


create new lilo boot loader - on 2nd drive

Fri, 13 Jun 2003 02:42:27 +0000
Geraldine Koh (geradin07 from hotmail.com)

Hi people, I have a problem......

I'm actually trying to mirror the hard disks using RAID 1 in Red Hat 9. It can work perfectly, but the bug is that I can only boot up the first hard disk; I suppose LILO is stored as the MBR in it. The second hard disk, during boot-up, shows LI, and I boot it using a bootup diskette instead. I'm wondering how to implement LILO in the second HDD in such a way that it auto boots up just like the 1st HDD. Is it possible?

Is it true that only one MBR can be used, or will it work with two MBRs on the two respective hard disks?

I visited the Boot + Root + Raid + LILO HOWTO documentation, and I tried this method to boot up the second HDD, but there's an error.

This is the RAID LILO config file pair that I implemented:

See attached geraldine.lilo.conf.hda.txt

I created these two LILO configuration files, but I'm not too sure whether they are being read or not, because I still have a current default LILO file, /etc/lilo.conf.

See attached geraldine.default.etc-lilo.conf.txt
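
Neither attachment is reproduced here, but for readers who want to follow along, the usual recipe from the Boot + Root + Raid + LILO HOWTO is a second config file aimed at the second disk, roughly like this sketch (device names are illustrative, not taken from Geraldine's files):

	# /etc/lilo.hdc.conf -- put a boot record on the mirror disk
	boot=/dev/hdc
	disk=/dev/hdc
		bios=0x80	# treat hdc as the first BIOS disk at boot time
	image=/boot/vmlinuz
		label=linux
		root=/dev/md0	# the RAID1 device holding the root filesystem
		read-only

followed by running "lilo -C /etc/lilo.hdc.conf" to write it to the second drive's MBR.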

Basically that's about all... I hope your gang can resolve my problem. Sorry if I bored you to sleep with such a long email. Hope to hear from ya soon...

Cheers, Geraldine


GENERAL MAIL


Re: Linux Gazette in Palm Format

Fri, 30 May 2003 17:47:34 -0400
Ben Okopnik (the LG Answer Gang)
Question by Herbert, James (James.Herbert from ds-s.com)

On Fri, May 30, 2003 at 12:36:02PM -0700, Heather wrote:

[Ben] You can use "bibelot" (available on Freshmeat, IIRC); it's a Perl script that converts plaintext into Palm's PDB format. I have a little script that I use for it:

Does the raw PDB format have a size limit? Our issues can get pretty big sometimes... -- Heather

[Ben] "The Complete Shakespeare" was over 5MB. No trouble at all, except for uploading it ("jpilot" wouldn't do it; neither would the Wind0ws prog on my brother's machine. Kudos to "coldsync".)

Plucker is an open source Palm document reader and, in my humble opinion, THE BEST. There are some really good Linux GUI document converters available for it.

I checked out sitescooper but unfortunately it is very out of date; I'll have to look at installing the scripts on my own box.

The issue I have when converting the site manually is that, as the site references links external to the main document, I get duplicate copies of the articles in one document, hence an extremely large file (Issue 91 is 1.98MB !!)

Anyway thanks very much for your help, I was quite surprised to get a response for such a trivial question --- thanks again

James

Glad we could help, though I'm disappointed to hear sitescooper isn't keeping up to date. -- Heather


Article Ideas - Semaphores

Tue, 3 Jun 2003 08:34:46 -0700
rwillis (rwillis from ctf.com)

I have done some searching on the internet for semaphores and have found very little info, and no tutorials. I think that you could use this as a topic to supplement your article on Message Queues in Issue 89 ("Exploring Message Queues, Part I", Raghu J Menon).

Suggested Sections

1 SystemV Semaphores (semget, semop, semctl)

2 POSIX 1003.1b Semaphores (sem_init, sem_wait, sem_trywait, sem_post, sem_getvalue, sem_destroy)

I have heard mention of something called pthread semaphores, but I am unsure as to what these are, or how to use them.

BTW, SystemV semaphores use a key_id (int) which must be unique. ftok() can be used to hash a key from a filepath and a project id, but there must be other ways to generate keys...

It would be really nice to see examples of this in action, as that is one thing that I could not find (exclusively for Linux that is).

Great Magazine!

Thanks,

Richard Willis, B.Eng (EIT)
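
Until such an article turns up, SysV semaphores can at least be inspected from the shell with the stock ipcs and ipcrm tools -- a handy sanity check alongside any code experiments (the semaphore id below is made up; use whatever ipcs reports):

	ipcs -s             # list all SysV semaphore sets (key, semid, owner, nsems)
	ipcs -s -i 12345    # detailed view of one set, by semid
	ipcrm sem 12345     # remove a leftover set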


GAZETTE MATTERS


The things we have to go through to get our articles

Fri, 30 May 2003 17:42:52 -0400
Heather Stern (Linux Gazette Technical Editor)

Ooh, ooh. . . . I used to ... at a former job, and hereby volunteer to write an article on setting up an equivalent. I may need some shouting and/or threats of physical violence to overcome my procrastination though.

Black helicopter request has been filed. It'll be right over as soon as our local operative-in-dark-glasses can fix the autopilot. Of course, if you finish the article before liftoff, do let us know, and we'll send over one of Wooner's beautiful dame clients to pick up the package...

Will do. Um... over and out (?)

Heh. One beautiful dame, coming up next article. Watch for long legs, slinky dresses, and languorous questions about whistling ability. -- Ben
And if you're the sort of person who can fry a good article up sometime this summer -- to make Linux a little more fun for folks who get dizzy when they need to know what sorts of barbecue briquettes are used for firewalls around here - do let us know. We're planning our editorial schedule to lay out how August and September will be released, and having some articles in ahead of time would be really, really handy. Now I can't guarantee a personal pick-up by ultra modern black helicopter with an absolutely gorgeous - shall we say bombshell? - dame flying it, but we can ask! :) -- Heather

This page edited and maintained by the Editors of Linux Gazette
HTML script maintained by Heather Stern of Starshine Technical Services, http://www.starshine.org/
Copyright © 2003
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 92 of Linux Gazette, July 2003

LINUX GAZETTE
...making Linux just a little more fun!
More 2¢ Tips!
By The Readers of Linux Gazette

See also: The Answer Gang's Knowledge Base and the LG Search Engine


Backup Software: Robustness

Mon, 2 Jun 2003 08:09:29 +1000
Nick Coleman (njpc from ozemail.com.au)

This is a reply to a letter to the Mailbag in the June 2003 issue of Linux Gazette, compressed tape backups

Quite a while back I remember a discussion on compressed tar archives on tape and the security risk, i.e. the data would be unrecoverable beyond the first damaged bit.

Now at that time I knew that bzip2, unlike gzip, uses a blocked format internally, and it should be possible to recover all undamaged blocks after the damaged one.
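
The tool that exploits that blocking is bzip2recover, which ships with bzip2. A rough sketch, with filenames invented for illustration:

	bzip2recover damaged.tar.bz2             # writes one rec*.bz2 piece per intact block
	bzcat rec*damaged.tar.bz2 | tar -tvf -   # list which files are still readable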

Your correspondent may like to look into afio instead of tar for backups. I believe it recovers from errors much better. The mondo rescue tool developer uses it.

Regards,
Nick Coleman

[JimD] The problems recovering tar files are worst with GNU tar operating on gzip'd archives. star (by Joerg Schily, of cdrecord and mkisofs fame), cpio, and pax are all better at resynchronizing to the archive headers past a point of file corruption than GNU tar.
afio might very well be better than cpio. I don't know; I've neither run my own tests nor perused the code.
In general I'd suggest that added redundancy (both through ECC -- error correction coding -- and additional separate copies) is the better way to make one's backups more robust.
I've heard that BRU (Backup and Recovery Utility, a commercial product: http://www.tolisgroup.com ) adds ECC and checksum data to the archive stream as it performs backups --- and defaults to verifying the archive integrity in a second pass over the data. With cpio, afio, tar, star, dump/restore and pax you have to write your own scripts to perform the verification pass. (cpio, and presumably afio, do add checksums; GNU tar doesn't; I don't know about the others.) So far as I know none of the common free tools adds additional ECC redundancy to their archives.
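A minimal do-it-yourself verify pass with GNU tar might look like this (tape device and paths are placeholders; the cd matters because GNU tar strips the leading / from member names):
	cd /
	tar -cf /dev/st0 home		# back up /home to the first SCSI tape drive
	mt -f /dev/st0 rewind		# rewind the tape
	tar -df /dev/st0		# -d (--diff): compare the archive against the disk
Anything that changed, or failed to write, is reported.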
There is an obscure little utility called 'ras' (redundancy archive system) which can be used to create a set of ECC (sum) files to go with set of base files and allow one to recover from the loss of a subset of base files. This is essentially a utility to manually (and crudely) perform the same sort of redundancy operations as a RAID5 subsystem.
http://www.icewalkers.com/Linux/Software/52890/ras.html
However, I should warn that I haven't used this at all, much less tried to integrate it into any sane backup/recovery scripts!
So far the best free backup tool for Linux still seems to be AMANDA (http://www.amanda.org ) though Bacula (http://www.bacula.org ) seems to have a similar and impressive feature set.
AMANDA still uses native dump and/or GNU tar to actually perform the backup. It initiates those processes on each client, aggregates their archives on a central server and manages the process of writing them out to tapes (optionally using a tape changer).
Thus, AMANDA is tape-centric and still has the inherent risks of the underlying archiver (the vendor's dump --- dump/restore for ext2 on Linux --- or GNU tar).
I think it would be neat if AMANDA or Bacula were integrated with ras or some redundancy library in some meaningful way.
There is an overview of these and other free backup packages for UNIX (and Linux) at:
http://www.backupcentral.com/free-backup-software2.html
Ultimately you'd want to keep multiple generations of data backups even if you knew that you had perfect ECC, redundancy, media, and drives. You need this for the same reason you need backups no matter how sophisticated and redundant your RAID array configuration is: you may find that your software or your users corrupt your data, and you may need to back off to earlier, known-good versions of the data, possibly days, weeks, even months after those backups were made.
(Some forms of corruption can be subtle and insidious).


can I have Linux on a ThinkPad G40? with WinXP?

Thu, 05 Jun 2003 18:35:32 PST
borejsza (borejsza from ucla.edu)

Hi,

I am about to buy a laptop and am looking for advice as to its compatibility with Linux.

I know little about computers (last time I owned one it was a Commodore 64), and less about Linux, but saw a friend use it, and would like to learn how to myself, and gradually move away from Windows. The laptop I am thinking of buying is an IBM ThinkPad G40 (http://www-132.ibm.com/webapp/wcs/stores/servlet/ProductDisplay?productId=8600909&storeId=1&langId=-1&categoryId=2580117&dualCurrId=73&catalogId=-840). I think it is a new model, and could not find it anywhere on the pages that list hardware that has already been tried out with Linux.

Can anybody confirm that I can partition that laptop between Linux and WindowsXP before I blow all my savings on it?

Thanks,
Alex

You could buy one preloaded from EmperorLinux: (http://www.emperorlinux.com/auk.html) -- Ben
Or they'll preload a dual boot, or can customize. (So this tip is good for more than that one model.) -- Heather
As far as I'm concerned, IBM-made hardware today should be a sure bet for Linux anyway: they've really thrown themselves behind Linux in a big way, and I'd be surprised to hear of a laptop they make that can't run it. Come to think of it, given the range of hardware that Linux supports these days, making a 'top that can't run Linux would be quite a trick in the first place. -- Ben
[jra] Now, that's not to say that you can easily dual-boot XP. There may be reinstallation issues, and licensing; I don't know that Partition-* or FIPS can safely resize whatever you have loaded without breaking it, and you may not have "install" media for XP -- only "recover" media, which will not let you install on a resized partition.
Missing install media for WinXP isn't relevant to its ability to coexist with Linux, but personally, if my vendor "forgot" to include the Other OS that I had paid for - I'd demand my real discs, or that they discount the box the price of their OS. Given the number of people competing for your business in this venue, I have precious little tolerance for that kind of ripoff. -- Ben
[jra] I would google for "linux win xp dual boot howto", and see what I got. -- jra
[Kapil] Apparently, the trick is to: (1) Install Linux and resize the NTFS partition (2) Boot the recovery CD for XP (3) Interrupt (count 5 :-)) the reinstallation process and run "OS.bat". It seems XP will then "just install" on the resized partition.
This worked with the laptops bought for our Institute. YMMV.
-- Kapil.


FTP Daemons (Servers) and Alternatives: Just Say No?

Tue, 3 Jun 2003 06:03:09 -0700
Jim Dennis (the LG Answer Guy)
Question by Dinos Kouroushaklis on the BLT-newuser list (Blt-newuser from basiclinux.net)

Dear list members,

I would like to hear your suggestions for an ftp server.

I would like to replace an existing Win2k FTP server with a Linux-based one. What I am interested in is reliability and ease of management. The machine should need only one (maybe more) ethernet card to provide the FTP service (except during installation time). If there are two ethernet cards, one can be used for management and one for the traffic.

The machine will be an Intel Celeron 400 Mhz with 160 (128+32) and 20 GB hard disk with a public (static) IP address in the DMZ.

Regards

Just to be contrarian, I have to suggest that you seriously consider abandoning FTP entirely. HTTP is adequate for simple, lightweight anonymous distribution of files (text or binary). scp, sftp (SSH) and rsync over ssh are inherently more secure than plain FTP can ever be. Your MS-Windows users can get Putty (and pscp, et al.) for free.

(Plain, standard FTP will, by dint of the standards, always pass user name and password information "in the clear" across the Internet --- thus exposing these valuable, private tokens to "sniffers".) For some purposes BitTorrent can be far more efficient (for widespread, peer-assisted distribution of files to many concurrent clients, for example).
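
For instance, a one-off secure download and a tree mirror over ssh look like this (host and paths invented for illustration):

	# fetch a single file securely
	scp user@files.example.com:/pub/release.tar.gz .
	# mirror a whole directory tree, compressed, over ssh
	rsync -avz -e ssh user@files.example.com:/pub/ ./pub/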

SSH, scp, and sftp:

http://www.openssh.org

Putty:

http://www.chiark.greenend.org.uk/~sgtatham/putty

rsync:

http://www.samba.org/rsync

BitTorrent:
http://bitconjurer.org/BitTorrent

If you can, just eliminate FTP and direct your users and customers to better alternatives.

In general the problem with FTP servers is that they run as root (at least during the authentication phase, if they support anything other than anonymous FTP). So FTP daemons have classically been a source of vulnerability (as bad as DNS --- BIND/named --- and MTA --- sendmail --- daemons).

With that in mind, vsftpd would probably be my first free choice. (http://vsftpd.beasts.org )
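
For flavor, a bare-bones anonymous-download-only setup takes only a few lines of vsftpd.conf (option names are from the vsftpd documentation; check the defaults shipped with your version):

	# /etc/vsftpd.conf -- anonymous downloads only
	anonymous_enable=YES	# allow anonymous FTP
	local_enable=NO		# no local user logins
	write_enable=NO		# no uploads, deletes or renames
	xferlog_enable=YES	# log all transfers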

ProFTPd is popular, and has a configuration file syntax that's vaguely similar to Apache/HTML/SGML (I'll leave it for others to judge whether that's a feature or a bug). However, ProFTPd is complex and has had too many security alerts posted against it for my tastes. (http://www.proftpd.org ).

WU-FTPD (for years the default that shipped with most Linux distributions) has the worst security track record in the field. I wouldn't recommend it; I don't care how many bugs they've patched. There comes a time to abandon the codebase and start from scratch. There also comes a time when "brand recognition" (the project's name) shifts from notoriety to notorious infamy.

By contrast, Chris Evans coded vsftpd specifically to be as secure as possible. He discussed the design and every pre-release of the code extensively on the Linux security auditing mailing list (and in other fora devoted to secure programming and coding topics).

If you're willing to go with a commercial/shareware package (that's not free) I'd suggest that Mike Gleason's ncftpd has been around longer than vsftpd and still has a very good track record. (http://www.ncftpd.com ). Registration is only $200 (U.S.) per server for unlimited concurrent connections ($100 for up to 50 concurrent users) and is free for use in educational domains.

If there are no objections I'd like to cross-post this to the Linux Gazette for publication (names of querents will be sanitized) since the question comes up periodically and I like to refresh this answer and the URLs.

All of this assumes that you have no special needs of your FTP server. If you need special features (directory trees restricted by user/group info, pluggable authentication support, virtual domain support, etc) then you'll have to review these products more carefully. However, each of them offers at least some virtual domain/server functionality and a mixture of other features.

[Dan] For a comprehensive annotated list, see: http://linuxmafia.com/pub/linux/security/ftp-daemons
Everybody's got their favorite, and mine's PURE-ftpd, of which Rick Moen of Linuxmafia says on the above page:
Seems like a winner.
http://sourceforge.net/projects/pureftpd


Pause after running xterm

Fri, 30 May 2003 20:39:56 -0400
Ben Okopnik (the LG Answer Gang)
Okay, so it's a nickel's worth. So there. -- Heather

Here's a little problem you might run into: you want to run a certain program - say, as a Mozilla "Helper application" - which needs to run in an xterm. So, you set it up like so:

xterm -e myprogram -my -options

The only problem is, when it comes time to run it, all you see is a flash as the xterm appears, then immediately disappears. What happened? What error did it print out? Why (this does happen at times) does it work when you launch it 'manually' but not from Mozilla?...

Here's an easy and useful solution that will require you to hit a key in order to exit the xterm after the program has finished running. Note that it may fail on tricky command lines (subshell invocations, evals, and other shell-specific gadgetry) but should work fine with normal commands and their options.

See attached okopnik.hold.bash.txt

Invoke it like so:

xterm -e hold myprogram -my -options
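
The attached script isn't reproduced here, but a minimal wrapper in the same spirit -- a sketch, not necessarily Ben's version -- could be:

	#!/bin/bash
	# hold -- run the given command, then wait for a keypress so the
	# xterm stays open long enough to read any output or errors
	"$@"
	echo -n "Press any key to exit..."
	read -n 1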
[jra] Were you actually planning to answer those questions, Prof?
Or are they left as an exercise for the students? :-)
[Ben] The answer is implicit in the solution provided, and will depend on the specific program being launched. The implementation, as always, is left to the student. Giddyap, dammit. :)
[JimD]
	xterm -e /bin/sh -c 'myprogram -my -options; read x'
... in other words, have a shell execute your program, then read a dummy value from the xterm (the xterm process' console/terminal/stdin)
The command will run, output will be displayed, you'll get a pause where you can type anything you like (also allowing you to scroll through the xterm's buffer). When you hit [Enter] the xterm goes away.
Seems pretty transparent to me. More verbose:
	xterm -e /bin/sh -c 'myprogram -my -opts; echo "[Enter] when done: "; read x'
More elegantly, create a two-line script:

See attached jimd.pauseadter.sh.txt

(I'm not really sure we need the eval, but I don't think it'll hurt in any case).
Now simply:
	xterm -e pauseafter.sh myprogram -my -opts
(/me shudders at the electrons that got excited by this blatantly obvious suggestion).


Tips on PDF conversion

Thu, 12 Jun 2003 12:12:55 +0100 (BST)
Mike Martin (the LG Answer Gang)

Has anyone any ideas on converting PDFs to decent text?

To explain

I have a document which has been scanned in, with the only accurate conversion being to pdf (no images)

So I have used pdf2ps, which gives me a ps file.

However, when I then use psto... anything text-like, the output is exactly ^L

Any ideas/tips?

[Thomas] If you could convert the pdf to ps and then to LaTeX, then you won't have a problem, since tex -> ascii is not a problem. However, going from ps to ascii might require some more thought.
I know that there is a utility called "a2ps" which takes ascii and converts it to a ps file; however, I cannot see a converse program.
I am sure that there is a perl module (hey, Ben!) that could be used to write a perl-script for such a task, however, I am going to suggest you try the following......(I haven't tested this):
strings ./the_ps_file.ps | col -b > ~/new_text_file.txt
I am shunting this through "col" since you describe having lots of "^L" characters. You might have to edit the file by hand as well, since I am sure that a lot of useless information is being processed.
[Ben] See the "pstotext" utility for that.
[Andreas] There's a utility called pdftotext; it is in the xpdf package, see the xpdf homepage http://www.foolabs.com/xpdf
Hopefully an OCR has been performed on your scanned document before it was converted to pdf, otherwise the pdf file would just contain an image and could not directly be converted to text.
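
A typical invocation, with filenames invented for illustration (the -layout option asks pdftotext to preserve the physical page layout, which helps with tables):

	pdftotext -layout scanned.pdf scanned.txt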

Unfortunately, and very annoyingly, this is what seems to have happened. Seriously aggravating software - it lies.

Off to see if I can work out how to convert the image to text (it's only tables)

[Ben] Well, if it's a picture, "pstotext" won't help. Oh, and don't bother with "strings" on a .ps file: it's all text.
[Robos] Hmm, I ran into some OCR discussion lately and found these: gocr and claraocr (http://www.claraocr.org). The latter one seems to be more evolved...


quotas on directories?

Tue, 3 Jun 2003 19:55:26 +0200
Emmanuel Damons (emmanuel.damons from enterpriseig.com)
Answered By Thomas Adam, Jim Dennis, Kapil Hari Paranjape

Hi

Can you help me? I need to specify the size that a folder can grow to - almost like quotas for a folder and not for users.

Thanks

[K.-H.] Spontaneous idea, especially if this is for one folder only:
create a partition of exactly the right size and mount it at mountpoint "folder". If creating a partition is not possible, use a file and mount it as a loop device.
[JimD] Along the same lines, you could use regular files with the loop mount option to create "partitions" of this sort.
Example:
		 dd if=/dev/zero of=/mnt/images/$FOLDERNAME bs=1024 count=$SIZE
		 mkfs -F /mnt/images/$FOLDERNAME
		 mount -o loop /mnt/images/$FOLDERNAME $TARGET
Where:
	FOLDERNAME is an arbitrary filename used as a "loopback image"
		(the container that the loop block device driver will treat
		as if it were a partition)
	SIZE is the desired size in kilobytes
	TARGET is the desired location of the "folder" (the mountpoint for
	   this filesystem).
You can use any of the Linux-supported filesystem types (ext2, ext3, minix, XFS, JFS, ReiserFS), and you can tune various options, like the amount of reserved space on such "folders" and which UID/GID (user or group) that space is reserved for. You should be able to use quotas, ACLs and EAs (access control lists and extended attributes) --- assuming you've patched your kernel for ACL/EA use and enabled it --- etc.
Obviously this approach has a couple of downsides. You need intervention by root (or some sudo or SUID helpers) to create and use these images.
[Kapil] Of course, you can use User-mode-linux to create and use these images.
[JimD] Also Linux can only support a limited number of concurrent loop mounts (8 by default). Newer kernels allow this as a module parameter (max_loop=<1-255> ... so up to 255 such folders maximum on the system). This limits the number that could be in concurrent use (though an unlimited number of these "folders" could be stored on the system, mounted and unmounted as needed).
There might be other disadvantages in performance and overhead (I'm not sure).
[Kapil] That would be a downside with UML if you use the file systems with UML.
[JimD] On the plus side you could have any of these encrypted, if you're running a kernel that's had the "International crypto" patch applied to it and you pass the appropriate additional options to the mount command(s). We won't address the key management issues inherent in this approach; suffice it to say that it almost forces us to make mounting these filesystems an interactive process.
If you wanted to have a large number of these, but didn't need them all concurrently mounted you might be able to configure autofs or amd (automounters) to dynamically mount them up and umount them as the target directories were accessed --- possibly by people logging in and out.
There are probably better ways, but this seems to be the most obvious and easiest under Linux using existing tools.
[Kapil] One solution (rather complicated, I admit) is to switch over to the Hurd, which allows such things and more complicated things as well.
Another is to use "lufs" or other "Usermode filesystems". These put hooks in the kernel VFS that allow one to set up a "user mode" program to provide the "view" of the part of VFS that lies below a particular directory entry.
[JimD] The very notion of limiting the size of a "directory tree" (folder) is ambiguous and moot given the design of UNIX. Files don't exist "under" directories in UNIX. Files are bound to inodes, which are on filesystems. Filenames are links to inodes. However, every inode can have many links (names). Thus there's an inherent ambiguity in what it means to take up space "in a folder" (or "under a directory"). You could traverse the directory tree adding up all files (and the sizes of all directories) thereunder (du -s). This works fine for all inodes with a link count of one, and for cases where all of the inodes are within the scope of the tree (and assuming there are no mount points thereunder). However, it's ambiguous in the general case and begs the question: just what are you trying to accomplish?
[Kapil] Excellent explanation Jim.


What is Reverse DNS?

Mon, 2 Jun 2003 20:37:46 EDT
(jimd from mars.starshine.org)
Question by TEEML914 (TEEML914 from aol.com)

I'm doing an assignment. Can you tell me in layman's terms what reverse DNS is?

[Faber] Yes, we can.

Thank you and have a great day

[Faber] You're welcome and have a spiffy night yourself..
[JimD] Faber, I think your cheerful sarcasm might be lost on him. After all, he's dense enough to take such a simple question (from his homework assignment, no less) and go to all the trouble of asking us.
Yes, we can tell you. We can answer such questions. With diligent work (as in DOING YOUR OWN HOMEWORK) you'd be able to answer questions like that, too.
For everyone else who hears this buzz phrase and wonders about it (people who aren't trying to skate through classes so they can make complete idiots of themselves when they enter a job market thoroughly unprepared by the schooling they shirked):

...............

"reverse DNS" is the process of asking the DNS (domain name system) for the name associated with a given IP address (which, of course, is numeric). Since DNS is primarily used to resolve (look up) an address given a name; this numeric to symbolic lookup is the converse operation. However, the term "converse" is somewhat obscure so the more literate and erudite among us are stuck with the phrase: "reverse DNS."
On a technical level, a reverse DNS query is a question for a PTR record in the in-addr.arpa domain. For historical reasons the in-addr (inverse address) subdomain of the "Advanced Research Projects Agency" domain (ARPA being the forebear of the Internet) is reserved for this purpose. For technical reasons the four components of a traditional "dotted quad decimal" representation of the address are arranged in reverse order: least significant octet first. This allows the most significant octets to be treated as "subdomains" of the in-addr.arpa domain, which allows delegation (a DNS mechanism for administrative and routing/distribution purposes) to be done on octet boundaries.
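For example, with the dig utility from BIND (192.0.2.1 is an address reserved for documentation):
	dig -x 192.0.2.1			# dig reverses the octets under in-addr.arpa for you
	dig 1.2.0.192.in-addr.arpa ptr		# the equivalent explicit PTR query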
Of course any good book on DNS will provide all of the gory details, or one could simply read the RFCs (request for comments documents), which are the normal mechanism by which standards are proposed to the IETF (Internet Engineering Task Force), which marshals them through a review and vetting process, publishes them, and recommends their adoption. (Since the Internet is still basically anarchic, the adoption of new standards is essentially a ratification process --- each Internet site "votes with its feet", as it were.)
In particular it looks like you'd want to read RFC3172:
http://www.faqs.org/rfcs/rfc3172.html

...............

Please have your instructor send my extra credit points c/o Linux Gazette and be sure to have him give you a failing grade in your TCP/IP or Internet/Networking Fundamentals class.
(In the unlikely event the assignment was to explore the use of sarcasm by curmudgeons in the Linux community --- then bravo!)


Subscribe to groups...........pan,Knode.......????

Wed, 25 Jun 2003 20:21:12 +0530
Vivek Ravindranath (vivek_ravindranath from softhome.net)
Answered By Dan Wilder, Karl-Heinz Herrmann, Anita Lewis, Ben Okopnik, Jason Creighton, Heather Stern

Hi Answer Gang,

Can you please tell me how to subscribe to Linux groups

[Dan] You might start by pointing your browser (konqueror, mozilla, lynx, w3m, netscape, and so on) at:
http://www.tldp.org
and browse what's there. Then look at
http://www.linuxjournal.com
http://www.linuxgazette.com
http://lwn.net
http://www.linuxtoday.com
http://www.slashdot.org
Then you might come back and explain in somewhat more specific terms what you're trying to do. There are lots of Linux websites, including documentation, news, online discussions; to get to any of those, you just click on links.
For e-mail discussion groups you mostly have to subscribe. How you do that depends on what group you're interested in. Once you're subscribed, any email you send to some submission address is duplicated and sent to all subscribers.
Many discussion groups have their archives open. For example, point your browser at
http://www.ssc.com/mailing-lists
for an overview of mailing lists hosted by SSC, publishers of Linux Journal.
From that page you can click on list information pages and get to list archives by following the links. The list information pages also let you apply for membership in the lists. Normally you'll get a confirming email back, and your list membership goes into effect when the list management software receives your reply.

such as yahoo groups,

[Jason] Well, "Yahoo groups" are just email lists, so you can subscribe to them and read them offline. Same deal for any mailing list.

google groups .......

[Jason] Now for newsgroups (what you call "google groups"; Google Groups is actually a web interface on top of Usenet). I use leafnode (sorry, don't have the URL, but a google for "leafnode usenet server" would probably turn up the homepage) for this. It's an easy-to-configure (IMHO) Usenet server that only downloads messages in groups that you read.

and download all messages for offline viewing using pan or knode or any other software (please mention the name of the software and URL). I want to view the messages offline.

First of all I don't know whether it is possible. Can you suggest any other methods to do so? By groups I mean any Linux group; please suggest any good Linux groups if possible... and please give the address that is to be entered in the address field of the viewer, and other details. I just want to get regular information regarding Linux... thanks in advance.

Vivek.

[K.-H.] For the offline reading: I'm not quite sure what "linux group" you are talking about. If you want to have a look at Linux websites as suggested, wwwoffle is very useful for caching webpages so you can view them at leisure offline. Any new link you click on will be remembered and fetched next time online. If you are talking about newsgroups (usenet) like comp.os.linux.*, I am using the [x]emacs newsreader "gnus", which has an offline feature called "agent". You can read the info pages on this, but if this is your first contact with news and [x]emacs then I cannot recommend this wholeheartedly -- gnus itself is rather complex and therefore powerful (or is it the other way round?). Agent is an additional layer of complexity which takes time to get used to.
pan I don't know,
It's a newsreader, whose name might offend a family publication, but which is nonetheless supposed to be very nifty. -- Heather
knode, I can only guess, is the KDE version of a newsreader; whether they support offline features I've no idea. There are other newsreaders: nn, tin, ... but as far as I know all lack the offline feature. netscape has a newsreader with rather limited offline capabilities, but for a first try that might be sufficient.
[Anita] Do you mean that you would subscribe to a mailing list on yahoogroups and then go there and download their archives? That is something I would like to know how to do too, because we had a list there and changed to our own server. I'd like to be able to get those old messages. Well, in truth, I would have liked to have had them, but now I think they are too obsolete. Still, I wouldn't mind having them, especially if I could get them into mbox format.
[Faber] <musing out loud>Couldn't you use something like wget in a Perl script to download the archives by links? Ben could probably write a one-liner to do it. In his sleep. :-) </musing>
[Ben] Actually, it would take some tricky negotiation, Web page downloading and parsing, etc. - it's a non-trivial task if you wanted to do it from scratch. "Yosucker" from Freshmeat is a good example of how to download their Web-only mail; it wouldn't be too hard to tweak for the above purpose (it's written in Perl.)
[Jason] You could probably just use wget, with some combination of -I and -r. The thing is a HTTP/FTP shotgun.
[Ben] Nope. Remember that you need to log in to Yahoo before you can read the stuff; after that, you get to click on the message links (20 per page or so) to read them. If it was that easy, they wouldn't be able to charge you for the "improved" access (which includes POP access to your mail and a bunch of other goodies.)
[Jason] Actually, I was thinking of download from an online mailing list archive, not logging into Yahoo.
Perhaps a little specific encoding with lynx' ability to pick up its transmission data from stdin ... -get_data. It's your login, so you'll need to guard your password in that packet from prying eyes. Like Ben says, tricky, but certainly it can be done. -- Heather


Confused about semantics of "mount -o,async/sync" commands

Thu, 12 Jun 2003 21:30:21 -0700
Bombardier System Consulting (bombardiersysco from qwest.net)
Answered By Karl-Heinz Herrmann, Thomas Adam, Ben Okopnik, Jim Dennis, Jay R. Ashworth

Hello,

I am taking a local Linux certification class and seem to have offended my instructor by questioning the semantics of the "sync" and "async" options in the mount command. They seem backward to me and I don't understand what I am missing.

The following are the definitions that I found online and understand for the words:

Synchronous (pronounced SIHN-kro-nuhs, from Greek syn-, meaning "with," and chronos, meaning "time") is an adjective describing objects or events that are coordinated in time. (within the context of system activities I associate synchronous with either being timing based or requiring an acknowledgement)

Asynchronous (pronounced ay-SIHN- kro-nuhs, from Greek asyn-, meaning "not with," and chronos, meaning "time") is an adjective describing objects or events that are not coordinated in time. (within the context of system activities I associate asynchronous with being event/interrupt driven).

It has been my experience and is my understanding with disk caching that data that is released to the system to be written to disk is kept for a specific time or until the cache is full before being written to disk. Hence synchronous. It is my experience and is my understanding that data from an application which is released to the system and is directly written through to disk is done so in an asynchronous or event driven manner.

[K.-H.] synchronous -- the application's intent to write data and the actual write happen at the same time
asynchronous -- the application's intent to write and the actual write are not at the same time, as the system decides when to write the cached data
[Thomas] These options are really useful in /etc/exports if you ever need to export directories over NFS, too. Although just don't specify them both at the same time!
[Ben] Yup. The latter is more efficient, since it allows the heavy lifting to occur all at once (one way to look at it is that the "startup" and "wind-down" costs of multiple disk writes are eliminated - you "pay" only once), but it is a little less secure in the sense of data safety - if your computer is, say, accidentally powered down while there's data in the cache, that data evaporates, even though you "know" that you saved it.
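
To see the two modes from the command line (device and mountpoint are placeholders):

	# synchronous writes: slow, but nothing lingers in the cache
	mount -o sync /dev/fd0 /mnt/floppy
	# async is the default; an explicit remount switches back
	mount -o remount,async /mnt/floppy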

This is evidently the opposite of the way that the terms are understood and used in Linux. Please help me understand.

Thanks,

Jim Bombardier

Put simply, ... you're wrong.
"sync" in the Linux parlance (and in other disk buffering/caching contexts with which I'm familiar) means that the writes to that filesystem are "synchronized" out to the disk before the writing process is scheduled for any more time slices. In other words, upon return from a write() system call the write as occurred to the hardware device.
This usage is consistent with the traditional meaning of the 'sync' utility (part of all versions of UNIX I've used and heard of). The 'sync' utility forces the kernel to "synchronize" its buffers/caches out to the device.
"async" means that writes are happening asynchronously to the ongoing events in the process. In other words mere return from the function call doesn't indicate that the data is safely flushed to the device.
Note that use of sync is strongly discouraged by kernel luminaries (Linus Torvalds in particular). I sometimes choose to override their better judgement myself --- but I do so only with considerable mulling over the tradeoffs. In general you're better off with a UPS (uninterruptible power supply) and a journaling filesystem than you'll ever be by trying to force synchronous writes for an entire filesystem.
Of course, with open source packages you can opt for aggressive explicit synchronization of selected file descriptors using the fsync() function. Note that this can lead to poor overall system performance in some cases. For example, MTAs (mail transport agents) and syslogd both make extensive use of fsync(). If they share the same filesystem (/var/log and /var/spool on a single volume), it can make the entire system feel sluggish under only moderate mail handling load (as each mail delivery logs several messages and each of those processes runs its own fsync() calls).
[jra] You know, the way I've always interpreted this is that it describes the coupling between the application program's logical view of the disk contents and the actual physical, magnetic contents of the drive, across time:
those views are either mandated to stay in "sync" -- no buffering; if the OS says it's written, it is on the platters -- or they're "async" -- the OS is permitted to "cheat" a little bit in between when it tells the app "it's written" and when it actually happens.
I guess it's really just another way of phrasing the same thing...