From: Digestifier <Linux-Development-Request@senator-bedfellow.mit.edu>
To: Linux-Development@senator-bedfellow.mit.edu
Reply-To: Linux-Development@senator-bedfellow.mit.edu
Date: Wed, 12 Oct 94 00:13:09 EDT
Subject: Linux-Development Digest #293
Linux-Development Digest #293, Volume #2 Wed, 12 Oct 94 00:13:09 EDT
Contents:
FTP slowdown under 1.1.52 with hdparm on (Garth C. Nielsen)
Re: ext2fs vs. Berkeley FFS (Chris Bitmead)
Re: Linux 1.1.52 (Lies, Damned Lies, and Benchmarks) (Michael O'Reilly)
Re: SMail security hole? (Davor Jadrijevic)
[Answer?!] Re: ext2fs vs. Berkeley FFS (Stephen Tweedie)
Re: tty bug and fix (for 1.1.51)
Re: A badly missed feature in gcc (Jeff Kesselman)
Anyone working on improving NFS? (Matthew D Stock)
Re: Linux killed my floppy drive! (Ahmed Naas)
Re: A badly missed feature in gcc (Steven M. Doyle)
----------------------------------------------------------------------------
From: gnielsen@clam.rutgers.edu (Garth C. Nielsen)
Subject: FTP slowdown under 1.1.52 with hdparm on
Date: 11 Oct 1994 21:09:10 -0400
Hello,
Not that I am complaining or anything. I just wanted to state that
I had a drop in transfer rate on my FTP transfers while using hdparm.
I am using 1.1.52 and running a SLIP line on a 14.4 modem. Normally I
get about 1.4 K/sec. But while running it with hdparm -m 32 the
ftp transfer stopped for a few seconds after each disk write. After
I set hdparm -m0, the ftp ran fine again. Any explanations?
Garth
P.S. While compiling 1.1.52, should I have copied files from /asm-386i
into /asm? It would not compile without that.
------------------------------
From: chrisb@stork.cssc-syd.tansu.com.au (Chris Bitmead)
Subject: Re: ext2fs vs. Berkeley FFS
Date: 11 Oct 94 17:46:51
In article <37bkbd$mjs@babyblue.cs.yale.edu> hstrong@eng1.uconn.edu (Hugh Strong) writes:
>Michael Bischoff (mbi@math.nat.tu-bs.de) wrote:
>: In article <ADC.94Oct9144148@bach.coe.neu.edu> adc@bach.coe.neu.edu (Albert D. Cahalan) writes:
>: I think 3 or 4 bits could be spared for extended attributes like this:
>
>: 0 flat file (default)
>: 1 NEW record file
>: 2 Mac file
>: 3 NT file
>: 4 OS/2 file
>: 5 DOS executable
>: 6 Windows executable
>: 7 etc.
>: Hi,
>: this would be against the philosophy of UNIX: less is more!
>: The success and flexibility of UNIX is due to the fact that
>: all I/O is done through files: ordinary files, hard disks, printers
>: are all accessible through the same open() and write() calls.
>: The mess of DOS is that you need special programs if you want
>: to read A: as an entire image.
>
>: Michael
>: --
>: EOI
>: ----------------------------------------------------------------------------
>: Michael Bischoff e-mail: mbi@mo.math.nat.tu-bs.de
>: Abt. Mathematische Optimierung or m.bischoff@tu-bs.de
>: Inst. Angewandte Mathematik or on.bischoff@zib-berlin.de
>: Pockelsstrasse 14 Tel. +49-531-391-7555, Fax: +49-531-391-4577
>: 38106 Braunschweig Germany
>
>I think some people may misunderstand what I am suggesting.
>Introducing a new set of namespaces like A:,C: and LPT1: would be
>an intolerable barbarity. Certainly no one accustomed to UNIX would
>think of using it. But what about extending the semantics of
>existing calls? This has occurred many times in the UNIX world. This
>is precisely what happens every time someone writes a driver
>with a new ioctl() call.
>
>For instance, to open the main (data) fork of a file, one
>might write
>
> fd = open("MyDataFile",O_RDONLY);
>
>The icon (for a window manager) for the file could be
>accessed by the following call.
>
> fd1 = open("MyDataFile:ICON",O_RDONLY);
>
>The state of an editing session on the file could be
>saved in yet another fork
>
> fd2 = open("MyDataFile:EDITSTATE",O_RDONLY);
>
>It is the FILESYSTEM code that grasps the semantics of what
>we are doing, not other parts of the kernel. If some of
>this functionality can be exported to user space, so much
>the better.
But the question still remains: Why do you want this???
You say you would like a "main fork" in a file and then various
"attribute" forks. Why this is better than a directory I don't know.
Why should there be one "main" fork? And why are you too lazy to use cp -r
to copy them?
What if you start to want forks with sub-forks. Soon you'll start to want
the full facilities of directories, and we might as well leave it the way
it is.
Don't be influenced by the over-featurism that NT offers. There's no need
for this crud.
------------------------------
From: michael@iinet.com.au (Michael O'Reilly)
Subject: Re: Linux 1.1.52 (Lies, Damned Lies, and Benchmarks)
Date: 11 Oct 1994 17:58:04 +0800
Jeff Kuehn (kuehn@citadel.scd.ucar.edu) wrote:
: Hi All!
[ ... ]
: 1.1.51|1. |1. |1. |1.1|1.2| .8| .8| .6| .9| .7|1. |1.1| 1. |1. |1. | .9|15.2
: 1.1.52|1. |1. |1. |1.1|1.2| .8| .8| .6| .8| .5| .7|1.1| 1. |1. |1. | .5|14.1
^^^
This makes no sense at all. There were no changes to the syscall
interface from 51 to 52 (at least, none that I saw in a glance through
the diff). I think I'll take those figures with a very large pinch of
salt.
Any chance you could run the test twice on 52 and 51 again and see how
stable the results are? I.e. what's the variance in these numbers?
Michael.
--
Michael O'Reilly @ iiNet Technologies, Internet Service providers.
Voice (09) 307 1183, Fax (09) 307 8414. Email michael@iinet.com.au
GCS d? au- a- v* c++ UL++++ L+++ E po--(+) b+++ D++ h* r++ u+
e+ m+ s+++/--- !n h-- f? g+ w t-- y+
------------------------------
From: davj@ds5000.irb.hr (Davor Jadrijevic)
Crossposted-To: comp.os.linux.help
Subject: Re: SMail security hole?
Date: 10 Oct 1994 12:10:21 GMT
: BTW: Its a 30 second recompile of src/main.c to fix the -D bug. Someone left
: the argument out of the list that causes the program to unsetuid.
Could you be so kind as to upload a binary version to e.g. sunsite :)?
Best regards, Davor.
--
<davor%emard.uucp@ds5000.irb.hr>, <davj@ds5000.irb.hr>
================ Davor Jadrijevic ====================
------------------------------
From: Stephen Tweedie <sct@dcs.ed.ac.uk>
Subject: [Answer?!] Re: ext2fs vs. Berkeley FFS
Date: Tue, 11 Oct 1994 14:51:12 GMT
Hi folks,
There's been a thread going on recently about ext2fs's performance
versus BSD's ffs. As one of the ext2fs maintainers, and the implementor
of the defragmentor and ext2fs performance code, here's a brief attempt
at an answer...
In article <36lqt6$t80@babyblue.cs.yale.edu>, hstrong@eng1.uconn.edu
(Hugh Strong) writes:
> Just wondering - How does the performance of Linux ext2fs compare with
> that of the Berkeley Fast File System (FFS) found in {386,Free,Net}BSD
> and other BSD variants? A number of posts to the 386BSD groups have
> recently sneered at ext2fs, presumably because of the considerations
> FFS makes for drive geometry and rotational parameters, which seem to
> be absent in the ext2fs source I've examined. Does anyone have any
> concrete performance statistics to back/refute these claims? Is
> anyone working on FFS for Linux?
In general, Linux's ext2fs is significantly faster than ffs. I don't
have hard performance data right beside me, but I can get it if you
like. From memory, ext2fs is typically 10% to 50% faster than ffs for
general use; some operations (such as unpacking a large tar archive) can
achieve a much greater speedup.
"Bill" == Bill Broadley <broadley@turing.ucdavis.edu> writes:
> It would seem that with increasingly intelligent scsi drives, and
> increasingly large on disk caches. (1MB aren't uncommon) that it
> isn't as necessary to know the hardware details of a drive because
> they are hidden from you by onboard cache policies, variable recording
> rates etc.
> Back when drives had no cache, using the geometry of a drive was a big
> speed win, but I suspect it's less so today.
+ Absolutely: ext2fs is optimised for linear block allocation.
Drive geometry is no longer a significant factor in getting good
performance. Modern drives do read-ahead anyway, so BSD's idea of
"rotationally optimal alignment" doesn't work. With drives doing
read-ahead, the optimum block placement strategy is to allocate blocks
sequentially as much as possible.
If FFS cannot allocate a block's successor linearly, it will try to
place it in a sector in the same rotational position on the same
cylinder. Ext2fs, on the other hand, tries to find a new location
where it can allocate several contiguous blocks to maintain as much
linearity as possible. This is a *huge* win in terms of raw
sequential performance. Ext2fs's allocation algorithm also renders it
much less sensitive to filesystem fragmentation as partitions get full
(after defragmenting a 99% full, 12 month old, very active filesystem
for the first time, I observed a general speedup for things like
kernel compiles of no more than about 5%).
"Mike" == Mike Haertel <mike@dogmatix.cs.uoregon.edu> writes:
> The Linux community may sneer at synchronous inode updates, but under
> BSD ffs I have never lost a file, which is more than I can say for
> ext2fs, which has cost me a whole partition at least once, simply due
> to its overoptimistic buffering.
> In fact, this was the issue that drove me away from Linux (to NetBSD)
> for over a year. I have only recently returned to the Linux fold
> since discovering that e2fsck has been dramatically improved.
+ Ext2fs does not perform synchronous update of filesystem metadata.
There is only one absolutely sure way to protect filesystem data ---
ordered writes with two-stage deletes. BSD's synchronous metadata
updates don't actually give your data any better protection than
ext2fs's deferred writes, and ext2fs's recovery tools are (as of
e2fsck-0.5) better than BSD's. Thanks to Ted Ts'o for this!
Although synchronous writing of metadata does ensure the internal
consistency of that metadata, unfortunately it does not give you any
guarantees about the safety of the files' data itself.
I seriously doubt that ext2fs's buffering strategy lost you that
partition. If that happened over a year ago, however, it is not at
all impossible that you fell victim to a kernel bug of some
description :-( You are spot on the mark in criticising the old
version of e2fsck, though.
The new e2fsck uses its knowledge of the fs data structures to recover
data as reliably as ffs can. The difference is in the metadata:
whereas ffs can always recover metadata, ext2fs can occasionally lose
the filenames of some files written just before a crash --- the files
will just be recovered into the lost+found directory. Ext2fs does not
leave data more vulnerable than ffs.
"Mike" == Mike Haertel <mike@dogmatix.cs.uoregon.edu> continues:
> Even so I am not wholly happy--the "clean" bit sometimes seems to be a
> lie. I have simply taken to running a forced fsck every time I boot,
> regardless of the clean bit.
As far as I am aware, the clean bit in ext2fs is completely reliable.
There are some common configuration problems which prevent it from
operating as expected: the most common are faulty shutdown sequences
which do not successfully unmount all your filesystems, and failing to
mount the root filesystem read-only. There can also be problems if you
are running an older version of e2fsck. The clean-bit mechanism has
been implemented for some time now, and there have been no reports in
recent memory of any suspected problems with it.
Cheers,
Stephen.
---
Stephen Tweedie <sct@dcs.ed.ac.uk> (JANET: sct@uk.ac.ed.dcs)
Department of Computer Science, Edinburgh University, Scotland.
------------------------------
From: manolo@fobos.ulpgc.es ()
Subject: Re: tty bug and fix (for 1.1.51)
Date: 8 Oct 1994 17:24:04 GMT
Frank Lofaro (ftlofaro@unlv.edu) wrote:
: There was a bug in the tty code involving EOF and EOF characters that
: was fixed a while ago, but the fix has become ineffective, and the
: bug has returned. The bug involves the fact a false EOF is returned
: when the EOF character is entered with characters waiting (i.e. the
: EOF character is meant as a push, not as a real EOF). The latest
: kernel 1.1.51 exhibits this bug.
: To see the bug in action:
: type dd bs=4
: then enter any multiple of 4 characters, ctrl-d, and the dd will
: terminate with an end of file. A lot of other un*xes screw up on this
: too, but Linux had it fixed. This patch is not too difficult, fixes the
: problem, and does not appear to create any new bugs; I have tested it in
: cooked and cbreak modes, with zero and non-zero min and time values, and
: it all seems okay.
: Hopefully this (or a better, more elegant fix) will go into 1.2.0:
: (I removed the old fix, since it does not do any good anymore)
: diff -r -u -N linux.dist/drivers/char/n_tty.c linux/drivers/char/n_tty.c
: --- linux.dist/drivers/char/n_tty.c Sun Sep 25 00:41:14 1994
: +++ linux/drivers/char/n_tty.c Sun Sep 25 21:44:35 1994
: @@ -453,6 +453,8 @@
: goto handle_newline;
: }
: if (c == EOF_CHAR(tty)) {
: + if (tty->canon_head != tty->read_head)
: + set_bit(TTY_PUSH, &tty->flags);
: c = __DISABLED_CHAR;
: goto handle_newline;
: }
: @@ -718,24 +720,6 @@
: *nr -= n;
: }
:
: -/*
: - * Called to gobble up an immediately following EOF when there is no
: - * more room in buf (this can happen if the user "pushes" some
: - * characters using ^D). This prevents the next read() from falsely
: - * returning EOF.
: - */
: -static inline void gobble_eof(struct tty_struct *tty)
: -{
: - cli();
: - if ((tty->read_cnt) &&
: - (tty->read_buf[tty->read_tail] == __DISABLED_CHAR) &&
: - clear_bit(tty->read_tail, &tty->read_flags)) {
: - tty->read_tail = (tty->read_tail+1) & (N_TTY_BUF_SIZE-1);
: - tty->read_cnt--;
: - }
: - sti();
: -}
: -
: static int read_chan(struct tty_struct *tty, struct file *file,
: unsigned char *buf, unsigned int nr)
: {
: @@ -744,6 +728,9 @@
: unsigned char *b = buf;
: int minimum, time;
: int retval = 0;
: + int size;
: +
: +do_it_again:
:
: if (!tty->read_buf) {
: printk("n_tty_read_chan: called with read_buf == NULL?!?\n");
: @@ -858,7 +845,6 @@
: put_fs_byte(c, b++);
: if (--nr)
: continue;
: - gobble_eof(tty);
: break;
: }
: if (--tty->canon_data < 0) {
: @@ -896,7 +882,14 @@
:
: current->state = TASK_RUNNING;
: current->timeout = 0;
: - return (b - buf) ? b - buf : retval;
: + size = b - buf;
: + if (size && nr)
: + clear_bit(TTY_PUSH, &tty->flags);
: + if (!size && clear_bit(TTY_PUSH, &tty->flags))
: + goto do_it_again;
: + if (!size && !retval)
: + clear_bit(TTY_PUSH, &tty->flags);
: + return (size ? size : retval);
: }
:
: static int write_chan(struct tty_struct * tty, struct file * file,
: diff -r -u -N linux.dist/include/linux/tty.h linux/include/linux/tty.h
: --- linux.dist/include/linux/tty.h Wed Aug 10 09:26:44 1994
: +++ linux/include/linux/tty.h Wed Aug 10 09:26:44 1994
: @@ -247,6 +247,7 @@
: #define TTY_EXCLUSIVE 3
: #define TTY_DEBUG 4
: #define TTY_DO_WRITE_WAKEUP 5
: +#define TTY_PUSH 6
:
: #define TTY_WRITE_FLUSH(tty) tty_write_flush((tty))
:
This seems to be already fixed up in 1.1.52...
MGM
------------------------------
From: jeffpk@netcom.com (Jeff Kesselman)
Subject: Re: A badly missed feature in gcc
Date: Tue, 11 Oct 1994 23:55:10 GMT
In article <hpa.0f5d0000.I.use.Linux@ahab.eecs.nwu.edu>,
H. Peter Anvin <hpa@nwu.edu> wrote:
>Followup to: <6447@sparky.mdavcr.mda.ca>
>By author: bruce@mdavcr.mda.ca (Bruce Thompson)
>In newsgroup: comp.os.linux.development
>>
>> APPLAUSE! It's about time someone said something like this. Please
>> don't take this the wrong way folks, but if you want to write _C_
>> code, write _C_ (as defined by the ANSI standard). If, on the other
>> hand you want to write _C++_ code, write C++ code (as defined by the
>> ARM) but _PLEASE_ don't complain that C isn't C++!
>>
>> The // comment syntax is not defined to be part of C, therefore gcc
>> should _never_ accept it as a comment. When invoked as g++ though,
>> it's compiling C++ and therefore // is valid syntax for a comment.
>>
>
>So? There is no law against making proprietary extensions, and *many*
>compilers have added the // comment as an extension to the C language,
>so you cannot trust code that relies on it not being there anyway. It
>has been speculated that this extension may make its way into the next
>revision of ANSI C.
This is doubtful. The problem is that making this a 'feature' of ANSI C
will all of a sudden make previously syntactically correct code fail to
compile or, worse, compile with a different semantic meaning. This would
be DISASTROUS to the attempt to standardize C.
(Modifying the previous example:)
#include <stdio.h>
int main(void)
{
int x;
x = 4//* Isn't this fun?*/ 2
;
printf("%d\n",x);
return 0;
}
Under current ANSI C, this program is perfectly legal and will print '2'.
With the addition you are suggesting, this code is STILL legal, but now
prints '4'.
What's REALLY scary about this is you have just introduced a dependency on
a carriage-return into a language that previously assigned no syntactic
significance to a carriage-return beyond that of any other separator.
(Before you cite macros, remember that these are handled by the
pre-processor, NOT the compiler proper.)
", but its a very baad design" -- Buckaroo Bonzai
Trying to make ANSI C into C++ is not just pointless, it's dangerous.
------------------------------
From: stock@cs.buffalo.edu (Matthew D Stock)
Subject: Anyone working on improving NFS?
Date: Wed, 12 Oct 1994 01:02:35 GMT
Hi. Is anyone currently working on improving NFS under Linux? If I
remember correctly, one of the big reasons it has performance problems is
because the caching is done using disk geometry information, which NFS
doesn't have.
Is my information out of date? In any case, I'm interested in working on
the problem, so if you have information on where I should start, please let
me know.
Thanks,
-Matt
------------------------------
Crossposted-To: comp.os.linux.help
From: ahmed@oea.xs4all.nl (Ahmed Naas)
Subject: Re: Linux killed my floppy drive!
Date: Tue, 11 Oct 1994 18:51:20 GMT
Ahmed Naas (ahmed@oea.xs4all.nl) wrote:
: So, did Linux kill my drive or is this one of those rare coincidences?
Ok, I pulled said floppy drive out today and cleaned it as many people
suggested. Result? It is working like a champ again :-)
Thanks to all who responded via e-mail or here.
--
The above is a result of random neuron activity in the writer's brain.
Ahmed M. Naas ahmed@oea.xs4all.nl
======================================================================
------------------------------
From: wcreator@kaiwan.com (Steven M. Doyle)
Subject: Re: A badly missed feature in gcc
Date: 11 Oct 1994 18:33:50 -0700
In <CHRISB.94Oct11172758@stork.cssc-syd.tansu.com.au> chrisb@stork.cssc-syd.tansu.com.au (Chris Bitmead) writes:
>Using comments is foolish IMO. With ifdef you can easily grep through the
>code to remove or look at commented out code. With C comments, it just
>gets lost.
I can't say that I've ever had that problem. Even with my latest project
(which is about 400K in source so far) the comments prove to be a more
efficient method unless you are looking at more than three lines of code.
For anything over three lines, I tend to use #ifdef/#ifndef constructs also.
--
| Steven Doyle, AKA World Creator | #include <std_disclaimer> |
| Sysop, NETDimension (818)592-6279 | For information on Artificial Worlds |
| wcreator@kaiwan.com | send email to wcreator@kaiwan.com for |
| wcreator@axposf.pa.dec.com | an information package. |
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: Linux-Development-Request@NEWS-DIGESTS.MIT.EDU
You can send mail to the entire list (and comp.os.linux.development) via:
Internet: Linux-Development@NEWS-DIGESTS.MIT.EDU
Linux may be obtained via one of these FTP sites:
nic.funet.fi pub/OS/Linux
tsx-11.mit.edu pub/linux
sunsite.unc.edu pub/Linux
End of Linux-Development Digest
******************************