
Re: [cros-dev] Proposed OOM improvements. Luigi Semenzato Thu Aug 05 12:00:36 2010

On Thu, Aug 5, 2010 at 11:09 AM, Greg Spencer <[EMAIL PROTECTED]> wrote:
> On Thu, Aug 5, 2010 at 9:14 AM, Will Drewry <[EMAIL PROTECTED]> wrote:
>> On Wed, Aug 4, 2010 at 8:11 PM, Luigi Semenzato <[EMAIL PROTECTED]>
>> wrote:
>> > I suspect there is one issue you may want to consider even before you
>> > get to the ones you mention.  We've had reports of "extreme slowness",
>> > and I was able to reproduce such a situation in the past.  The slowness
>> > (and pegged disk activity) is consistent with thrashing due to code
>> > paging.  Even though we don't use swap, the kernel will still reclaim
>> > read-only executable pages since they have a backing store (the
>> > executable file).  I suspect this may make the system unusable before
>> > you get into an actual OOM situation.
>> Out of curiosity, would this still be the case if Chrome was running
>> with rlimits?  Will it still attempt to swap out read-only executable
>> pages to keep memory use under that bar or will it just start
>> returning malloc failures?
> Good question.  I'll have to look into that some more.  I know that cgroups
> will reclaim from the cgroup LRU list when it approaches the limit however,
> so maybe that's the way to go.
> One issue with setting resource limits is figuring out what to set them to.
> We'd have to be constantly tweaking the values whenever the system code
> changes (larger/smaller system memory use can come from anywhere).
> Seems like what you'd want to do is measure how much memory the browser and
> system (I'm including X and the window manager in the "system") are using,
> and set the limits for the renderer and plugin processes to give the browser
> and system some headroom.  But it would have to be dynamic, since the browser
> grows with more tabs, etc.
> My gut feeling (although I definitely could be wrong here) is that we'd end
> up with behavior similar to the OOM killer strategy in the long run --
> renderers would die before the browser and system processes, it would just
> be a different cause of death.
>> When I did some very informal testing a few months back, running
>> chrome with 90% of the system memory and opening many, many tabs
>> resulted in sad faces, but no thrashing.  But that was unscientific
>> and I never ended up exploring the rlimit v OOM code in the kernel
>> (thus the question).
> Yes, this is my experience with limited testing as well.
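To make Will's rlimit question concrete: as I understand it, RLIMIT_AS caps the
process's virtual address space, so once the limit is hit the kernel simply
refuses new mappings (mmap/brk return ENOMEM) rather than reclaiming harder --
malloc fails outright instead of triggering more paging.  A minimal sketch
(Linux, using Python's resource module; the 512 MB cap and 4 GB request are
arbitrary illustration values, not anything Chrome actually sets):

```python
import resource

# Remember the current limits so we can restore them afterward.
soft, hard = resource.getrlimit(resource.RLIMIT_AS)

# Cap this process's address space at 512 MB.  Future allocations that
# would push us past the cap fail immediately; no extra reclaim happens.
resource.setrlimit(resource.RLIMIT_AS, (512 * 1024 * 1024, hard))

try:
    blob = bytearray(4 * 1024 * 1024 * 1024)  # try to grab 4 GB
    outcome = "allocation succeeded"
except MemoryError:
    # CPython's malloc/mmap returned NULL under the rlimit, which
    # surfaces here as MemoryError -- the "malloc failure" case.
    outcome = "allocation failed"

# Restore the original soft limit (raising soft back up to hard is allowed).
resource.setrlimit(resource.RLIMIT_AS, (soft, hard))
print(outcome)
```

So with rlimits the renderer would see failed allocations (sad tab) rather
than the system-wide thrashing described above -- consistent with Will's
informal observations.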

That's interesting.  I never got sad faces during my tests, only slow
response and pegged disk activity.  This is from last November.  I was
running on a white eeepc.  My testing strategy was to go to Google
News and control-click links as fast as I could.  Then I'd go to some
of the new tabs and control-click more random links.  A lot of the
pages had Flash-based ads.
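For what it's worth, the code-paging thrash I describe has a visible
signature even with no swap configured: the major-fault counter for a
process keeps climbing, because every fault is a disk re-read of an
evicted executable page.  A rough sketch of watching that counter on
Linux (the helper name is mine; field layout per proc(5)):

```python
import os

def major_faults(pid: int) -> int:
    """Return the cumulative major page-fault count from /proc/<pid>/stat.

    A major fault means the kernel went to disk to satisfy it -- exactly
    what happens when reclaimed read-only executable pages are paged back
    in.  A steadily climbing count on a swapless system points at code
    paging rather than anonymous-memory swapping.
    """
    with open(f"/proc/{pid}/stat") as f:
        stat = f.read()
    # comm (field 2) may contain spaces and parentheses, so split on the
    # last ')' and parse the remainder, which starts at field 3 (state).
    fields = stat.rsplit(")", 1)[1].split()
    # majflt is field 12 overall, i.e. index 9 of the remainder.
    return int(fields[9])

print(major_faults(os.getpid()))
```

Sampling this for the renderer PIDs during one of my Google News
control-click sessions would confirm (or refute) the code-paging theory.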

Chromium OS Developers mailing list: [EMAIL PROTECTED]