Editor: Michael Orr
Technical Editor: Heather Stern
Senior Contributing Editor: Jim Dennis
Contributing Editors: Ben Okopnik, Dan Wilder, Don Marti
...making Linux just a little more fun!
From The Readers of Linux Gazette
I have tried lots and lots and lots of things to get DNS set up properly, but can only get it to work intermittently. What I am trying to achieve is a fairly small, fairly simple, multi-domain DNS host for a few domains that my company owns, and then a few that we are/will be hosting in the near(ish) future. I have some Linux boxes (RH62/RH7x) configured for various tasks like email, database servers, Apache, Samba etc. Some of the DNS authorities are with register.com, some with various ISPs. I can modify the register.com ones once I have the thing working, and will ask the ISPs to "hand over" the SOAs as well, but first I have to get the confounded thing to work properly.
I shall describe what I would like, using thumbsuck names and ip's, and would be VERY happy and appreciative if you could tell me how the config files should look.
Let's say I own/have howdoesdnswork.com as my main domain, and host some others like:
whywontthiswork.net imconfused.co.za plshelp.org.za
I plan to run (I think I have it working, but can't test properly till DNS works) a virtual domain mailhost (qmail based) system. I have a fixed IP/permanent connection for my main (own) domain, which is on IP subnet 126.96.36.199
DNS server (primary) is/will_be 188.8.131.52 (using BIND8)
DNS server (secondary) is/will_be 184.108.40.206 (also using BIND8)
Mail server for howdoesdnswork.com (and in fact all above) is 220.127.116.11
Web server (for all above using apache virtualhosting) is on 18.104.22.168
You're probably thinking "Why is someone so clueless even attempting something like this", but I've gotta start somewhere if I'm ever gonna learn. Pls pardon my ignorance (& I'll pardon your sniggering ).
From what I have read, tried, and struggled with, I need zone files for each domain, each of which contains host info etc. Here are my attempts, comments etc. still included ..... followed by a desperate request (on my knees, tears running down my face etc.) for assistance/guidance/criticism.
See attached Murgatroyd.dns-configuration-files.txt
I would also like to get some info on "mail server splitting" - as in having a local mail server (proxy) on a DSL-connected LAN which forwards internet emails to a main server (mailhost) on a permanent connection, but transmits local mail as local mail, and then downloads mail from the "mailhost" to the local "mail proxy" at a polled interval. But I'm probably pushing my luck here, so I'll post this one another time..... unless of course...??? I have a working system using micro%$#* but would like to get rid of ALL M$ products as soon as humanly possible.
Thanx a stack Trent
For DNS questions there's a great resource called "Ask Mr.DNS" - but he won't answer generic requests, they'd have to be reachable from the net. Still, his archives are catalogued by category at http://www.acmebw.com/cats.htm
This certainly looks like a solvable problem; if your patience wears too thin with Micro[snip] and your time too short, you might want to dig into the Consultants Howto. While by title you'd think it was "Howto become a consultant" it's really "Howto find a consultant". There's lots of 'em, but you may recognize a few names in those pages... Check out http://www.tldp.org -- The Scissors
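For orientation, here is a rough sketch of the sort of zone file BIND 8 would want for the main domain, using the nameserver, mail, and web addresses from the letter. The host names (ns1, ns2, mail, www), serial number, and timer values are purely illustrative; adjust them to taste before use.

```
; db.howdoesdnswork.com -- minimal illustrative zone file
$TTL 86400
@       IN  SOA  ns1.howdoesdnswork.com. hostmaster.howdoesdnswork.com. (
                 2003030101 ; serial (YYYYMMDDnn)
                 10800      ; refresh
                 3600       ; retry
                 604800     ; expire
                 86400 )    ; minimum TTL
        IN  NS   ns1.howdoesdnswork.com.
        IN  NS   ns2.howdoesdnswork.com.
        IN  MX   10 mail.howdoesdnswork.com.
ns1     IN  A    188.8.131.52
ns2     IN  A    184.108.40.206
mail    IN  A    220.127.116.11
www     IN  A    18.104.22.168
```

Each hosted domain (whywontthiswork.net and friends) would get its own zone file of the same shape, pointed at the same NS, MX and web addresses, plus a matching "zone" stanza in named.conf.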
I hope this is a legitimate question: At faked.org a person subscribes to Info@fake.nl with his e-mail address email@example.com. Okay, fetchmail collects the e-mails and sendmail distributes them on a Linux server. What is weird: the sender sends to himself . . . in Pine one sees: From: Info@fake.nl To: Info@fake.nl
Now I would not deliver such an e-mail . . . . But Linux is more polite and sends it to the local person of last resort.
I cannot but forward the e-mail to firstname.lastname@example.org. I do not want to mess with the sendmail and fetchmail configurations, as they do perfectly what they should do with normal e-mails.
For your information I have put here-under what Outlook Express shows. One blames me for not delivering the e-mail in the normal way . . . (Help !)
In outlook express one sees:
See attached Chris-de-Boer.headers-oe.txt
This is a multi-part message in MIME format. I had extra fun snipping the bulky HTML attachment into shreds small enough to wheelbarrow off in a tilted over greater-than symbol. Clipping the equals marks off the line ends was gravy. -- The Scissors
Excellent material on this site !!!
I have a small problem. I have two subnets: 22.214.171.124/24 and 192.168.52.0/24. I want all the hosts on one to see the others on the other subnet. In other words, I want to have NO BLOCKING of any service from either side.
I have been able to make the 168 network hosts see and access the 149 hosts. From the 149 subnet I can ping a host on the other subnet, but I can't, for example, see a PC's shared directories. Security is not an issue as they are both internal networks.
I am running Coyote Linux on a floppy.
Any help will be greatly appreciated and thanks in advance.
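One likely culprit: routing gets pings across fine, but seeing a PC's shared directories relies on SMB browsing, which uses subnet broadcasts that a router like the Coyote box won't forward. If a Samba server lives on one of the subnets, a fragment along these lines can push announcements to the far side. This is a hedged sketch: the broadcast addresses follow the letter's two networks, and a WINS server (shared by clients on both subnets) is the more robust fix.

```
[global]
   # Announce this server, and sync browse lists, to the other subnet's
   # broadcast address so it shows up in Network Neighborhood over there.
   remote announce = 192.168.52.255 22.214.171.255
   remote browse sync = 192.168.52.255 22.214.171.255
```

On the Windows side, an lmhosts file naming hosts on the far subnet achieves much the same for direct access without browsing.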
I had a similar problem, the other way around... The BIOS has a function for PS/2 keyboard and mouse power-up; check that it is set for your needs... The problem is that it doesn't work with all OS shutdowns. Don't know how that comes about...
Ruben Hansen alias GbyTe
I assume you mean the APM features in the BIOS. I wonder if perhaps this is BIOS-specific somehow. I know that almost all BIOSes are "standard" in terms of options, but you never know......
I got a lot of useful answers from this group for my previous question [TAG] RPM - Installing Packages. Thanks a lot to Thomas, Ben, Breen, David and Rick.
Actually I'm trying (to simulate the RPM tool's functionality using C) to develop a Java packager tool for handling *.rpm files on a Solaris box. It should support packaging operations like install, query, verify, erase etc.
As a first task, I'm trying to simulate the RPM query option, to query installed packages. If we query an uninstalled package, it should say it's uninstalled. It should generate package info as we get from: rpm -qi <file>. Also to query the list of files, pre- & post-installation scripts, the list of dependencies, and the lists of dependencies covered and not covered. I don't know how to use the <rpmdb.h> file for generating this information.
Hmmm, from my limited knowledge I think that you can just query the RPM database directly without going via <rpmdb.h>
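To make that concrete: rather than decoding the database files behind <rpmdb.h>, a first cut can shell out to the rpm command itself and interpret its exit status and output. A sketch, assuming an rpm binary on the PATH; the package name in the comment is hypothetical:

```shell
#!/bin/sh
# Query helpers that wrap the rpm command instead of using <rpmdb.h>.

pkg_installed() {
    # "rpm -q" exits 0 when the package is installed, non-zero otherwise
    rpm -q "$1" >/dev/null 2>&1
}

pkg_query() {
    if pkg_installed "$1"; then
        rpm -qi "$1"            # header info, as from "rpm -qi <pkg>"
        rpm -ql "$1"            # list of files
        rpm -qR "$1"            # dependencies (what the package requires)
        rpm -q --scripts "$1"   # pre/post (un)installation scripts
    else
        echo "package $1 is not installed" >&2
        return 1
    fi
}

# usage: pkg_query some-package-name
```

A Java front end could exec these same commands and parse their output; porting to a Solaris box then needs only the rpm binary, not the C library headers.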
Can anybody (in this group) help me in this regard?
Which part are you implementing in Java? (I got 93/100 for that last semester...)
Any other suggestions that would help me to proceed in this project?
Sounds like an interesting project. Readers, if any of you have done the same, or seen a project with this in progress, let us know! -- The Scissors
I know that this list is called "linux-questions-only" and therefore I have to prove that it is indeed a Linux question! So: I want to travel to the USA with my Linux laptop and need some reliable internet access via modem or ISDN (is ISDN available there, and if so, which standard is used?).
In Germany we have something called "Internet by call" where you call a number and pay via telephone bill, you don't have to register and don't have to pay in advance.
What are the options to get a Linux laptop on the net in the USA? Since some of you guys are living there, you may answer the question. A quick glance at Google results has only shown me some calling-card providers where you buy "points" and then use the web until they are empty. AOL (since it's not Linux-compatible) and T-Online global access (since I'm not a customer) are not options.
"If liberty means anything at all, it means the right to tell people what they do not want to hear."
Heh, normally I snip sig blocks, but this seems particularly apt to the BitKeeper related mail a little later on this page. -- The Scissors
[Tux] You can do what I do, since I travel a lot: use AT&T. It's $20/month, and they have dial-up numbers pretty much everywhere; AFAIK, more so than any other service. The only caveat is, no outgoing SMTP - you have to use their mail servers to push your stuff out. For most people, that's not a problem; I just find it to be an annoyance. Easy fix: set up one conffile for a smarthost and one for a local MTA and swap them as necessary. For extra ease of use, don't run in daemon mode - just invoke the MTA per-message (and an occasional cron job in case anything gets stuck in the pipe.)
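That conffile swap can be as simple as a symlink flip. A sketch, assuming sendmail-style configs; the file names and paths here are hypothetical, so adapt them to your MTA and distribution:

```shell
#!/bin/sh
# Pick which MTA config should be live: "road" relays everything through
# the provider's smarthost; "home" delivers directly from the local MTA.
pick_conf() {
    case "$1" in
        road) echo sendmail.cf.smarthost ;;
        home) echo sendmail.cf.direct ;;
        *)    echo "usage: $0 road|home" >&2; return 1 ;;
    esac
}

conf=$(pick_conf "${1:-home}") || exit 1
echo "would activate /etc/mail/$conf"
# As root, the actual swap would be something like:
#   ln -sf "/etc/mail/$conf" /etc/mail/sendmail.cf
# followed by restarting (or HUPping) the MTA if it runs as a daemon.
```

Invoked per-message (no daemon mode), as Tux suggests, there is nothing to restart; the next invocation simply reads whichever config is live.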
[Swirl] You know, if I faced that situation, I'd rsync or scp my outgoing mail over to my own MTA. That's what Andrew Tridgell does. (For that matter, I usually just ssh over to my MTA box anyway, where I have mutt perpetually running under GNU screen, interacting directly with the local mail spool.) They don't block port 22 (ssh), do they? That would be a deal-breaker.
[Tux] You know, I just got a shell account with <freeshell.org>; that is an excellent idea. I'd really like the ability to keep 'Fcc's on my laptop, but I can always pull down the files.
[Swirl] Entrusting outbound mail to AT&T's smarthost seems an unjustifiable compromise, in any event. Not acceptable.
[Tux] Parallels my own attitude; however, I didn't have the means to support it until now.
These are interesting thoughts, but, quite untrue to their nature, the Gang didn't answer the question as offered. If you know of a purely by-the-call internet provider in the United States, or you happen to be one, chime in, and we'll see that you get noticed. At the moment the very closest I can think of is that some internet coffee shops have taken to selling day passes to their wireless hookup, or sell hourly time while you enjoy the coffee, and hotels in urban areas are starting to offer high-speed access, also paid by the day. You'd need a wireless or ethernet card respectively. Europeans please note that the phone system in the US rarely offers direct-plug ISDN - that's considered a business-class data line around here. -- The Scissors
I read your 'Greeting' in the latest LinuxGazette online and found it interesting, and correct. The last company I worked for actually switched from cvs, to BitKeeper, for all the cool features. Very long story short, after one year of us debugging their product, paying tens of thousands of $ for the privilege, and never having Larry McVoy stop being a pain in the ass, we dumped them and went back to cvs, and all was well (after they threatened to sue us, etc... at the time we definitely had superior lawyers, and they knew they had no case--but why even threaten? Leaves a bad taste.) Afterward, we missed changesets a bit, but not as much as you might think. And we got so much better performance for "simple things", that it made up for it, in our minds. Oh, and we could save the tens of 000s of $, which was in-line with our whole philosophy, anyhow.
My. I've no idea why large enough corporations think being a poor sport would keep them customers who are on the verge of flying the coop. One would think the other way happens more often - attempts to lure one back and all.
"Large enough corporations" I suppose -- BitMover is/was only a handful of folks. I felt they were very deceptive about the quality of their code (which did improve during the year we spent with it, but should have been that way to start with), and were obviously trying to exploit Linux (by convincing Linus to use BK), as a marketing tool. I don't believe LM has ever contributed to an opensource project, if that tells you something.
My own experiences of these systems have been with the bits only and not so much with the personalities that drive them. I'm all for people having pride in their work... but a little respect around the neighborhood here pays us all back best.
(Search Linux Gazette back archives on the title "The Coin of the Realm" for an interesting editorial on that. Issue 65, I think.)
It was issue 64 actually. OpenProjects has become http://www.freenode.net and if you enjoyed the concept, you might be interested in reading ESR's two papers that came after "The Cathedral and the Bazaar" ... "Homesteading the Noosphere" and "The Magic Cauldron" ... since he explores the anthropological concepts of "gift culture" and other modes of economics in more academic detail. -- The Scissors
It's too bad BitMover, the company, isn't nearly as cool as BitKeeper, the product, is. I told Linus in email (not sure if he ever even read it) way back when to be careful; Larry is not really a "plays well with others" kind of guy. (This is probably the understatement of the week, but I'm in a charitable mood.)
Heh. Linus has been known to declare himself "not a nice guy" on occasion too. I've always found him gentlemanly, but I wasn't toe to toe with him on the right or wrong ways to implement a deeply integral kernel function, either.
Your comments were right-on, and interesting to read. I don't usually visit the site but maybe I'll try to do so more in the future. (And if you're curious, my current shop is all-windows, that's what I inherited... trying to slowly turn things towards opensource solutions, but it's quite the effort. I thought switching from SourceSafe would be hard, but it turns out they don't even use that!! Wow. We're going to use cvs hosted on linux, with the excellent and free TortoiseCVS windows clients.)
I'm glad you enjoyed my mangled thoughts on it all. Your response goes to show that one of the lessons of open source remains the ability to vote with our feet, ultimately enforced by the right to just plain re-do it ourselves.
Let us know if there's any good stuff you'd like to see in our pages!
This is with reference to "Perl One-Liner of the Month: The Case of the Evil Spambots", which was published in LG #86. I especially enjoyed your definition of Gibberish.
Here is something I found in my fortune files. I am pretty sure wordsmithing in the Marketroid language is done using this procedure.
I wouldn't be surprised at all... Of course now I've just got to turn it into a Perl script.
See attached gibberish.pl.txt
There's something to convince your boss that Perl is the language of choice...
Thanks for writing, Raj - hope you're enjoying the articles! -- Tux
-- Jimmy O'Regan
Extra cool. I loved the reader's comments. -- Tux
This looked interesting enough to toss the clipping in. Maybe we should have stuffed it in News Bytes, but the air compressor wasn't in at press time. The "groups of linux users everywhere" is a list of LUGs and service also hosted at SSC. -- The Scissors
Some of you may have seen the recent story by ESR on NewsForge about SCO suing IBM for billions over IBM's "disclosure" of SCO intellectual property to the "free software" community.
In a nutshell, SCO bought the Unix source and related IP from Novell in 1995. Caldera (which was never much of an "open source" company) recently became SCO, and since then, they have been looking high and low for whom they could sue over their IP. A rumour surfaced a while back that they had retained David Boies to sue people, which they promptly denied. Now, guess who's leading the charge against IBM? Yep. SCO has become openly hostile to the Open Source community, and this looks like the desperate effort of a dying company to grab money by suing people rather than making a better product. IBM has the deepest pockets, so they get sued first.
Anyway, the whole point of this is that I recently received a package of SCO Linux software for distribution to my LUG. You may have received such a package as well. If so, I would encourage you to send it back to SCO with a note explaining (lucidly) why. I don't know that it will ultimately do any good, but maybe it will get their attention.
Paul M. Foster
Suncoast Linux Users Group (SLUG)
I am sorry if this message is in HTML format; Hotmail doesn't give a plain text option, so I don't know what it's doing.
I believe you need to see: http://expita.com/nomime.html#hotmail -- Swirl
I enjoyed reading the letters page. It reflected the range of responses I received quite well. Just for your information, over 500 of your readers downloaded my library during February.
On behalf of our missing Editor Gal, thanks, Stephen! It's good to know we snipped it just right. Loyal readers, I've also snipped the ensuing fragmented discussion about the nature of languages that sprung up among TAG ... you'll probably see something of that in a later issue. -- The Scissors.
What a week. I looked high and low and all over the offices. Those editors have gone and left me here to deal with everything on my own. Luckily I have quite a bit of practice at the grindstone and keeping my wits sharp.
I thought I was being a shear genius when it occurred to me to check the back room and see if the Answer Gang was in there. This is their column, after all. I'm sure they'd lend me a hand. I can snip right through this thing.
Imagine my surprise when there just aren't any people to be found around here at all. They must be off at a conference or something. I can't even find Ben's dark glasses.
Ok, fine. I can do it. I'm the Editor's Scissors and I've seen all of the good stuff that has ended up in /dev/cuttingroom as the great stuff makes it to print. I raided the loose bits on the desk (I made nice use of this old SCSI adaptor for that!) and I've rounded up a few buddies to make up for the missing Answer Gang. I do believe you'll recognize a few of these characters. To introduce any that you don't recognize, I've (as per the editing guidelines) included their bios.
For extra credit, if you can solve the mystery of who all these figures are standing in for, feel free to send our staff a note. I'm sure when they've stopped fooling around they'll see I've done a fair job.
Hoping to be back in the editor's hands next month... have a great one, readers!
Snippings Provided By The Wizard's Hat
From Billy a.k.a. CustomerMarket
I am trying to configure my two computers, with Linux and Windows 2000, into a network. I am using a DSL modem and router. I would really appreciate it if somebody could spare a few ideas, because I am on the verge of breaking my head. (Not literally, though.)
Thank you all
[Wizard Hat] Okay. You install and configure Linux and connect it to your network. Then you install MS Windows 2000 on the other computer and connect it to your network.
I'm going to make a wild-ass guess that your DSL modem/router is doing IP masquerading (a particular form of NAT, network address translation) and it probably offers DHCP services on its "inner" (or LAN, local area network) interface --- leasing out a set of RFC1918 "reserved" addresses (192.168.x.*, 10.* or 172.16.*.* through 172.31.*.*). So, you can probably configure both computers to just get their networking information from the router dynamically (automatically).
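For reference, those RFC1918 private ranges are easy to test for. A quick sketch of the check, good for dotted-quad addresses:

```shell
#!/bin/sh
# Return success if $1 is a dotted-quad address in an RFC1918 private range.
is_rfc1918() {
    case "$1" in
        10.*)                                   return 0 ;;  # 10.0.0.0/8
        172.1[6-9].*|172.2[0-9].*|172.3[01].*)  return 0 ;;  # 172.16.0.0/12
        192.168.*)                              return 0 ;;  # 192.168.0.0/16
        *)                                      return 1 ;;
    esac
}

for ip in 10.1.2.3 172.20.0.9 192.168.1.10 198.51.100.7; do
    if is_rfc1918 "$ip"; then echo "$ip: private"; else echo "$ip: routable"; fi
done
```

If both boxes end up with addresses from one of those ranges after a DHCP lease, the router is doing what this guess assumes.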
The exact details of configuring your router, and W2K for this are beyond our purview. Talk to your ISP or refer to the router's documentation for the former. Call Microsoft or find a Microsoft-centric support forum for the latter.
The precise details of configuring Linux to use DHCP depend on which distribution you use. In general the installation programs for mainstream distributions will offer this option in some sort of dialog box or at some sort of prompt. That's the easiest way of doing it (easiest meaning: "requiring the least explanation in this e-mail"). You haven't said what distribution you're running, so I couldn't offer more specific suggestions without having to write a book.
This all seems pretty obvious. I suspect that you have some other needs in mind. However, we haven't installed the telepathy protocol daemons in our little brains yet. So we can't hazard a guess as to what you mean by 'configure.'
I might guess that you want to do file sharing between the two: read a book on Samba to let Linux export/share some of its disk space (filesystems and directories) to the MS Win2K system, and perhaps look for a chapter or so on smbfs for Linux to "mount" (access) shares from the W2K system (i.e. to go the other way).
I might guess that you want to access your Linux system, particularly its command-line interface, from your Windows desktop system. In that case download and install PuTTY (the best free ssh client for MS Windows; I would say the best ssh client all around). That will allow you to "ssh" into your Linux system (open command-prompt windows to administer it and run programs from there). You might even want to remotely access graphical Linux programs from the Windows box (or vice versa). In that case you'd probably want to look into VNC (virtual network computing --- actually a rather silly name). VNC clients and servers run under Linux (and other forms of UNIX) and MS Windows, and there is a Java client that can even run from a web browser.
There are numerous other ways to do each of these, BTW. You could install NFS clients on the Windows side for filesharing (those were all commercial last I heard). You could use the MS Windows telnet clients and install and configure the deprecated (as in "insecure, use at your own peril") telnet service (daemon) on the Linux side for character-mode (terminal and command line) access. And you could get X servers for MS Windows (most are commercial), and/or you could run rdesktop for Linux to access the MS Windows "Terminal Server" features (however, the Terminal Services are an expensive add-on for Windows, as far as I know). In other words, Samba/smbfs, PuTTY/ssh, and VNC represent a set of services that provide file, command, and remote graphical support between the two systems using only free and well-known software at both ends.
I might provide more details on how these packages could be used. However, each of these is just a shot in the dark at what you might be looking for. So I've spent enough time on the question.
Please note: anything I say about MS Windows is likely to be wrong. I haven't used MS Windows regularly for almost 10 years. At the last couple of places where I worked or contracted that put MS Windows systems on my desk (to access Exchange for their e-mail and groupware/scheduler functions) I found that I barely used them --- e-mail, browser, and PuTTY were as much as I ever used on any of them. I'm almost exclusively a UNIX/Linux administrator and programmer, so I've deeply lost touch with the whole Microsoft-based universe.
Here are a few URLs you can use to read more about these packages:
- Samba: http://www.samba.org/
- OpenSSH: http://www.openssh.org/ (Both of the preceding packages are included with every major mainstream Linux distribution by default. SSH is often installed and configured automatically these days --- just check the appropriate box during your Linux installation, Samba may require somewhat more manual configuration).
- Putty: http://people.nl.linux.org/~bjs/putty/download.html (SSH client for MS Windows; can be installed by just dropping one .EXE file into any directory, optionally on your PATH. Other optional components are similarly easy to install.)
- Cygwin: http://cygwin.com/ (Environment to support UNIX and Linux software, compiled and running natively under MS Windows. I mention it here primarily because they have a list of packages that have already been ported --- ssh clients and servers in particular. Note: the level of integration and interoperation between the Cygwin environment and the rest of MS Windows can be frustratingly rudimentary. It can be confusing and the Cygwin environment can feel like an isolated subsystem of the Windows box; almost like being on a different machine at times).
- VNC: http://www.realvnc.com/ (Included with many distributions, but usually not installed by default. You have to install and configure it manually).
- TightVNC: http://www.tightvnc.com/ (An enhanced version of VNC, also free under the GPL. Might be better on the MS Windows side as client and server for the Win2K box)
- rdesktop: http://www.rdesktop.org/ (A client for the MS Windows RDP (remote desktop protocol), which is apparently derived from the Citrix ICA protocol. The client runs on Linux or UNIX. Might require special MS Windows software or licensing on the server side.)
This was posted in the open forums attached to "Langa Letter" -- one of the InformationWeek regular columns. The Answer Guy's actual reply is what's sitting here in my clippings-box; the column he was replying to was:
Fred Langa / Langa Letter: Linux Has Bugs: Get Over It / January 23, 2003
Fred's comment about "severity" is, as he points out, inherently subjective. His numerical analysis is also subject to more issues that he's simply ignoring.
For example, the 157+ bug count for RH 7.2 or 7.3 includes fixes for many overlapping products, and many that are rarely installed by Linux users -- RH simply includes a lot of optional stuff. Meanwhile the count for Microsoft may still be artificially low, since MS is known to deliberately minimize the number and severity of their bug reports. Many of their 30+ reported patches might include multiple fixes, and descriptions which downplay their significance.
Fred also, inexcusably, argues that "first availability" of a fix (in source form, sometimes in focused, though public, mailing lists and venues) "doesn't count" as faster. That is simply jury-rigging the semantics to support a prejudiced hypothesis.
Another approach to looking at the severity of bugs is to view the effect of exploits on the 'net as a whole. In the history of Linux there have only been a few widespread worms (episodes where a bug's exploit was automated in a self-propagating fashion); Ramen, Lion and Adore are the three which come to mind.
Subjectively, the impact of these was minimal. The aggregate traffic generated by them was imperceptible on the global Internet scale. Note that the number of Linux web, DNS and mail servers had already surpassed MS Windows servers by this time --- so the comparison is not numerically outrageous.
Compare these to Code Red, Nimda, and the most recent MS SQL Server worm. The number of hosts compromised, and the effect on the global Internet, have been significant.
I simply don't have the raw data available to make any quantitative assertions about this. However, the qualitative evidence is obvious and irrefutable. The bugs in MS systems seem to be more severe than comparable bugs on Linux systems.
If a researcher were really interested in a rigorous comparison, one could gather the statistics from various perspectives --- concurrently trying to support and refute this hypothesis.
Fred is right, of course, that Linux has many bugs --- far too many. However, he then extends this argument too far. He uses some fairly shoddy anecdotal numbers, performs trivial arithmetic on them, and tries to pass this off as analysis to conclude that there is no difference between MS XP security (and that of their other OSes) and Linux's (Red Hat).
I won't pass my comments off as anything but anecdotal. I won't look up some "Google" numbers to assign to them and try to pass them off as statistical analysis.
I will assert that Linux is different. That bugs in core Linux system components are fewer, less severe, fixed faster, and are (for the skilled professional) easier to apply across an enterprise (and more robust) than security issues in Microsoft based systems.
The fact that numerous differences in these two OSes make statistical comparison non-trivial doesn't justify the claim that there is no difference.
Further anecdotal observations show that the various Linux distributions and open source programming teams have done more than simply patch bugs as they were found. Many of the CERT advisories in Linux and elsewhere (on the LWN pages, for example: http://www.lwn.net/ ) are the result of proactive code auditing by Conectiva, Gentoo, S.u.S.E., IBM and the MetaL group at Stanford, among many others. In addition, many of these projects are significantly restructuring their code, even whole subsystems, in order to eliminate whole classes of bugs and to minimize the impact of many others.
For instance, the classic problems of BIND (named, the DNS server) running as root and having access to the server's whole filesystem used to be mitigated by gurus by patching and reconfiguring it to run "chroot" (locked into a subdirectory tree) and with root privileges dropped after initial TCP/port binding (before interacting with foreign data). These mitigations are now part of the default design and installation of BIND 9.x. Linux and other UNIX installations used to enable a large number of services (including rsh/rlogin and telnet) by default. These services are now deprecated, and mainstream distributions disable most or all network services by default and present dire warnings in their various enabling dialog boxes and UIs before allowing users to enable them.
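For the curious, the guru-applied mitigation for BIND looked roughly like this invocation (a sketch; the chroot path and user name are illustrative, and BIND 9 sets up the equivalent by default):

```
# -u named          drop root privileges to the unprivileged "named" user
#                   after binding port 53
# -t /chroot/named  lock the daemon into a subdirectory tree, so even a
#                   compromised named can't read the rest of the filesystem
named -u named -t /chroot/named
```

The point is not the exact flags but the design shift: the damage a bug can do is bounded before the daemon ever sees foreign data.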
These changes are not a panacea. However, they are significant in that they hold out the promise of reducing the number and severity of future bugs, and they artificially inflate recent statistics (since the majority of this work has been done over the last two or three years).
Fred will undoubtedly dismiss these comments as being more "rabid advocation" by a self-admitted Linux enthusiast. He may even point to MS' own widely touted "trustworthy computing" PR campaign as evidence of a parallel effort on "the other side of the Gates." However this message isn't really written to him.
It's written to those who want to make things better.
The real difference between security in MS and in Linux is qualitative rather than quantitative. With Linux every user and administrator is empowered to help themselves. Every one of us can, and many more of us should, accept a greater responsibility for our systems and their integrity and security. Linux users (including corporations, governments and other organizations) can find and fix bugs and can participate in a global community effort to eliminate them and improve these systems for everyone.
Let's not get wrapped up in blind enthusiasm and open source patriotism. But let us not fall prey to the claim that there is no difference. There is a difference, and each one of us can be a part of making that difference.
From Licht Bülb
Answered By Dolavimus the platypus, Pretzel, Virtual Beer, Konqi, Tuxedo T. Herring, Swirl, corncob Pipe, the Scissors, Amanda the Panda
I'm fiddling on my laptop again (almost all things are working) trying to get s-video out working. In the course of this fiddling I'm raking wildly in the bios which sometimes screws up the display (and I am still able to login via ssh) or the machine freezes. Then I have to "push the button" and - since I was forced to use ext2 (ext3 accesses the hd every 5 secs -> spindown and therefore power saving impossible) I have to wait quite some time for the fsck to finish with my 10 GB root system (yeah, now I know why to have multiple partitions..). So,
does any of you know a journaling fs which plays nice with laptops? I googled a bit, read stuff, but didn't find anything about it. I think I remember someone here in TAG saying that reiserfs had some patch to play nice? Can someone confirm that?
Cheers and TIA
[Dolavimus] I've been using reiserfs on my laptop since it was new two years ago, with good results. Although I haven't done anything in particular to address hd spin-down for power economy.
[Amanda] I have actually used ext3 with a recently installed Debian "testing" distribution. The hdd access can be "reduced" by installing "noflushd".
noflushd doesn't work with any j-fs. It says so on the web page.
[Amanda] However, I confess that I haven't really had a chance to fully examine this issue. As regards partitioning the disk the following works well: /, /usr, /boot as ext2 mounted read-only. Of /tmp, /var and /home which need to be writable only /home really is usually large enough to require journalling ( and sometimes /var ).
I have tried several things over time: only one root and a small boot, the full monty with all dirs on separate partitions, and some things in between. The prob with several partitions: when you need some large space, (naturally) none is there on a single partition. Across two or more there would be enough... Disadvantage of only one /: you can not unmount anything beforehand if you know you're gonna crash the system now... And a crypto-fs is hard to make then too.
[Pretzel] I think the very idea of a journaling filesystem makes "play[ing] nice" impossible. Journaling filesystems have to access the hard drive on every write. More accurately, they have to access the journal device on each filesystem write.
Well, it might be necessary for the j-fs to write to its journal on every fs access, but ext3 writes to hdd every 5 secs, regardless of fs access or not.
[Pretzel] I think most journaling filesystems in Linux have an option for the journaling device, which is normally on-disk but can be on any block device, at least with ext3 and reiserfs. Some non-volatile memory would do nicely, but on a laptop, I think the chances of being able to do this are almost nil.
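For ext3 specifically, the e2fsprogs tools can put the journal on a separate block device. A dry-run sketch (device names are hypothetical, and the commands are echoed rather than executed here, since running them would reformat the journal device):

```shell
# Hypothetical devices: a CF/USB partition for the journal,
# and the laptop's disk for the data.
JDEV=/dev/sda1     # external journal device
FSDEV=/dev/hda3    # filesystem that should use it
# Echoed, not run -- these would destroy data on $JDEV:
echo mke2fs -O journal_dev "$JDEV"         # format JDEV as a journal device
echo tune2fs -J device="$JDEV" "$FSDEV"    # attach the ext3 journal to it
```

The filesystem then only touches the external journal on metadata commits, though the data blocks themselves still land on the main disk eventually.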
I (as a not-knower) would have two ideas: a compact flash card in the pcmcia slot and - RAM! In case of the RAM, the journal would then be written to hdd when a normal hdd access also takes place. True, this would be bad if a system outage occurred without the journal having been rewritten to disk, but I would take these chances... In case of CF, dunno if you can plug them in straight away or if you need an adapter, but 16MB are really cheap and if you can save some power (-> time) with it...
[Pretzel] Doing it in RAM would effectively make a journaling fs useless. What would be the point then? That's the same effect as using a non-journaled filesystem.
Well, the journal would get written to disk with the data. If you use noflushd then writes of the system (logging etc.) get postponed or get written to RAM and then get written to hdd if a normal write (user initiated) occurs. So, I dunno exactly how noflushd does this but when it redirects the writes to ram the journal entries (if they need to be made in that case) should be written to ram too.
[Scissors] This isn't so much "written to RAM" like scribbled in a ramdisk - it's more like being hidden in the RAM of a caching controller. That's all noflushd does: allow some buffering at the filesystem-driver level. So if something really does have content for the disk - and yes, that includes its journals - it's either got to hit the disk eventually, or you get to bear the risk that something might fail before it does.
But the whole point of having a journal is to have it still be present after a reboot event made something which normally isn't volatile space, lose its cookies. Having a journal that isn't allowed to do its job just complicates matters. Ergo, it shouldn't be put on volatile RAM.
[Pretzel] Another possibility: compiling a kernel with magic sysrq support. If the machine isn't totally frozen, you could do an emergency Sync/Unmount/Reboot.
OK, now I have to admit it: HOW THE HECK DOES THIS WORK? I read the stuff in /usr/src/linux/docs/ but as far as I gathered, Alt-Print d should do something? It doesn't in my case. I compiled this in (if it's only the Magic SysRq key in Kernel hacking) some time ago (not in my current kernel, I see now) but then (I tested it) it didn't work.
[Pretzel] Worked for me. (I say worked because at the moment I don't have it compiled in.) Try Alt-Sysrq-<magic sysrq command> (all three at the same time.) I don't remember if it needed all three at once or not. sysrq probably only says "print screen" on some keyboards.
Yeah, the prob was all at the same time... Now it works.
[Scissors] At least part of the confusion is with SysRq - on some keyboards the SysRq lives as a subfunction on another key. Thus for such keyboards you'd also need the extra key that invokes the secondary keycode. Fn maybe.
When it works then you should be able to (for instance) press Z and get a little help list. In fact any character that doesn't do something is supposed to show the little list. What I'm not clear on is "get outta sysrq mode"...
I've sometimes seen a console get into a state where it would respond to Magic SysRq, but it couldn't get out of that mode anymore. So I hope you have some spare virtual consoles, if you are just using it to settle something simpler than "telinit 6 doesn't work".
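As an aside, a kernel built with magic sysrq support usually also exposes the same commands through /proc, which is handy over ssh when the keyboard combination is awkward. A sketch (the destructive commands are shown only as comments, since they take effect immediately):

```shell
# Check whether Magic SysRq is enabled (non-zero) or disabled (0);
# falls back gracefully on kernels without the feature:
cat /proc/sys/kernel/sysrq 2>/dev/null || echo "no sysrq support"
# Enable it at runtime (needs root):
#   echo 1 > /proc/sys/kernel/sysrq
# Trigger commands from software instead of Alt-SysRq-<key>:
#   echo s > /proc/sysrq-trigger   # emergency sync
#   echo u > /proc/sysrq-trigger   # remount all filesystems read-only
#   echo b > /proc/sysrq-trigger   # reboot immediately -- last resort
```

The usual crash drill is s, then u, then b: sync what you can, get the filesystems read-only, then reboot - which is exactly what saves you the long fsck.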
[Pretzel] So, in summary: No.
A No? <shout> I DON'T ACCEPT NO "NO" ! </shout> (yes, I've been in the army..)
[Beer] Why not plug in one of these USB memory sticks and keep the journals there? That way the immediate access is only required for the USB device and not for the actual HD.
Cool idea, hadn't thought of this one, although I recently bought one! <me stupid>
[Beer] Which one? I've seen those Sony ads -- but they tell a lot about some strange cruft with user management and crypto software, which suggests custom WinXX drivers to my suspicious ears.
The cheapest one I could find. It was 49 Euro, super slim and works like a charm. I think I will use gpg to encrypt files on the thing and simply put a win gpg version on the stick too.
[Beer] If I ask for a drive give me drive and not an encryption device where I can't trust the encryption anyway....
[Beer] Might still need patches, but if the journal keeping actually needs only access to the journal file and is not accessing the HD where the data are finally going anyway -- it might even work with existing journal/kernel code.
Then on the other hand, you want journaling for some testing period -- why is power saving during that time so important? You could always switch back to unjournaled if power saving is important. ext2/3 in that case would be easiest, as a cleanly unmounted ext3 is mountable as ext2 without problems. The switching between ext3 and ext2 might be done by boot options, I guess. (can you check for boot options in an init.d script?)
Well, the case is pressing just now in this testing period. I just rebooted and had a full fsck (with Ctrl-d at the end) just now and I really need a j-fs NOW. But I like the idea of j-fs's generally, so I would like to keep it after the fiddling too. You are right that ext2|3 would work now, but I am a little bit burnt with that, since I had ext3 in the very beginning, installed the system (a lot) and then wanted to switch to ext2 (with removing the journal) according to the manual. But the ext-tools weren't current at that time in testing|sid, so this broke (!) the fs with a FULL reinstall (some bad things occurred at the same time leading to this). But ext2|3 would be an option.
[Beer] I never tried to completely switch back to ext2 and delete the journals, but I did mount ext3 as ext2 and nothing bad happened. I made sure they were clean, though.
On a laptop, power saving is only a real issue on battery -- then the sudden crashes should not be that frequent, unless you insist on fiddling around in kernel space on the train.... So for some normal usage, running on battery and ext2 might be sufficient, but when you are on mains power you can switch back to ext3. There must be a way to make this decision in lilo by some option which is evaluated in init.d/boot*
Or check if you are on battery and make the decision based on that. Frequent sleep/resume in between is then not that practical, though.
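On an APM-era laptop, the AC line status can be read straight from /proc/apm (the fourth field: 0x01 means mains, 0x00 means battery). A sketch that falls back gracefully where APM isn't compiled in:

```shell
# /proc/apm format: driver bios flags ac-status battery-status battery-flag ...
if [ -r /proc/apm ]; then
    ac=$(awk '{print $4}' /proc/apm)
    case "$ac" in
        0x01) echo "on mains power" ;;    # safe to run ext3
        0x00) echo "on battery" ;;        # prefer ext2 / allow spindown
        *)    echo "ac status unknown" ;;
    esac
else
    echo "no APM support"
fi
```

An init or apmd script could use the same check to pick the mount type automatically instead of a lilo option.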
[Konqi] For those living in Germany: Aldi has a 128 MB USB memory stick for 49.99 Euro next week, I read in a Heise announcement.
And for those who want to know something about mine: http://www.computer-cash-carry.de/ccc/index-ns.html see Festplatten IDE, there at the bottom (all German, just for the curious)
[Tux] Any of you guys want to write up an article on what it takes to get one working, or is it simple enough to describe in a 2-cent tip? I'd be very curious.
modprobe sd_mod        # SCSI disk support (the stick shows up as a SCSI disk)
modprobe usb-storage   # USB mass-storage driver
mount /dev/sda1 /mnt/usbstick
[Tux] Wow. Mon