
Re: /proc/loadavg



On Wed, Oct 23, 2002 at 06:13:49PM -0500, Jeff Licquia wrote:
> The load average equals the average of the number of tasks running or in
> a runnable state at any instant, as opposed to zombie tasks, tasks
> waiting on an operation such as I/O, suspended tasks, etc.

Actually, processes waiting on I/O (uninterruptible sleep, the "D"
state on Linux) are counted in the load average, which is why it
spikes when an NFS server is down, a disk goes off-line, or a box
just gets insanely I/O bound.
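
As a quick illustration (just a rough sketch of my own, not anything
out of the kernel), something like this reads /proc/loadavg and then
counts processes sitting in uninterruptible sleep by scanning
/proc/[pid]/stat.  The stat parsing is simplified and assumes the
command name has no spaces in it:

  /* Sketch: print the load averages and count processes in
   * uninterruptible sleep ("D" state), which Linux counts toward
   * the load average along with runnable tasks. */
  #include <ctype.h>
  #include <dirent.h>
  #include <stdio.h>

  int main(void)
  {
      char buf[256];
      FILE *f = fopen("/proc/loadavg", "r");
      if (f && fgets(buf, sizeof(buf), f))
          printf("loadavg: %s", buf);  /* 1, 5, and 15 minute averages */
      if (f)
          fclose(f);

      int dcount = 0;
      DIR *proc = opendir("/proc");
      struct dirent *de;
      while (proc && (de = readdir(proc)) != NULL) {
          if (!isdigit((unsigned char)de->d_name[0]))
              continue;                /* only numeric entries are PIDs */
          char path[300], state;
          snprintf(path, sizeof(path), "/proc/%s/stat", de->d_name);
          FILE *s = fopen(path, "r");
          if (!s)
              continue;
          /* field 3 of /proc/[pid]/stat is the state (simplified parse) */
          if (fscanf(s, "%*d %*s %c", &state) == 1 && state == 'D')
              dcount++;
          fclose(s);
      }
      if (proc)
          closedir(proc);
      printf("processes in uninterruptible (I/O) sleep: %d\n", dcount);
      return 0;
  }

On a box that's beating on a dead NFS mount you'll see that count (and
the load) climb even though nothing is actually using the CPU.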

> So there's no magic rule about load averages.  You could have two
> single-CPU boxes, and the one with a 1.5 load average could perform much
> worse than the one with a 4 load average.  I've heard of boxes with load
> averages in the three-digit range (before the decimal) that are still
> usable.

My UML hosting server had recently been running with a steady load
average of anywhere from about 1.5 to around 6 because it was so I/O
bound.  I talked about this a bit at the meeting last night...  After
moving UML's TMPDIR to tmpfs, the load is a little more reasonable:

  9:24pm  up 4 days, 13:30,  3 users,  load average: 0.11, 0.14, 0.08

Like I was saying at the meeting last night, UML (User Mode Linux)
creates a file of the size specified by mem= on the command line,
mmaps it, unlinks it, and uses the mmap'd file as its memory.  Even
with disk caching, once about 10 virtual servers were running on one
box, the insane amount of I/O had the server absolutely crawling.
Now, with tmpfs, memory accesses in UML usually just translate to
memory accesses on the host, and at worst to a little swapping.
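
In case anybody is curious, the pattern looks roughly like this (my
own sketch, not UML's actual source; the 64M size and the filename are
made up for the example):

  /* Rough sketch of the trick described above: create a file in
   * TMPDIR, size it to the requested "memory", mmap it shared, then
   * unlink it so it vanishes when the process exits. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
      size_t mem = 64 * 1024 * 1024;      /* stand-in for mem=64M */
      const char *tmpdir = getenv("TMPDIR");
      if (!tmpdir)
          tmpdir = "/tmp";

      char path[4096];
      snprintf(path, sizeof(path), "%s/vm_file_XXXXXX", tmpdir);

      int fd = mkstemp(path);             /* backing file in TMPDIR */
      if (fd < 0 || ftruncate(fd, mem) < 0) {
          perror("backing file");
          return 1;
      }

      void *vm = mmap(NULL, mem, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      if (vm == MAP_FAILED) {
          perror("mmap");
          return 1;
      }

      unlink(path);       /* gone from the namespace; the space is freed
                             once the fd and the mapping are gone */

      memset(vm, 0, mem); /* "guest" memory accesses are just reads and
                             writes through this mapping */

      munmap(vm, mem);
      close(fd);
      return 0;
  }

When TMPDIR is on a regular disk, every page of that mapping that gets
dirtied can turn into real disk I/O; on tmpfs the pages just live in
RAM (or swap, under pressure), which is where the difference in load
comes from.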

Not that all that has much to do with the topic at hand.  I just
thought I'd share an example.  :-)

Steve
-- 
steve@silug.org           | Southern Illinois Linux Users Group
(618)398-7360             | See web site for meeting details.
Steven Pritchard          | http://www.silug.org/
