SETI & the system page file?

Message boards : Number crunching : SETI & the system page file?
Profile Cliff Harding
Volunteer tester
Joined: 18 Aug 99
Posts: 1432
Credit: 110,967,840
RAC: 67
United States
Message 2000472 - Posted: 1 Jul 2019, 3:19:00 UTC

I know I'm showing my age, but if this old man's memory is still largely intact, a great many moons ago there was a discussion on whether or not BOINC, and SETI in particular, should use the system page file. In the client computing preferences, under the disk & memory tab, there's a box to designate the percentage of the swap/page file that BOINC uses. At the time, I think it was mainly determined that, to keep the hammering on the relatively slow (at that time) HDDs to a minimum, and because of the limited (compared to today's) storage sizes, the use of this file should be kept to the very minimum (1%). I say 1%, because 0% was not accepted by the client.

Fast forward 18 or 19 years and we've come to the age of really fast processors, SSDs, and faster and larger amounts of memory (RAM). The question being asked: is this thinking still valid? Has any further serious thought been given to this these days?

For example, on my box the system typically allocates approx. 10,147 MB (system managed), but rarely uses any of it. I would hate to see all of that space going unused.


I don't buy computers, I build them!!
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2000476 - Posted: 1 Jul 2019, 4:30:11 UTC

Everything you point out about the increase in system memory and the increase in storage drive sizes and access speeds points to the fact that a system does not need a page/swap file anymore. It is only required if you are running an application or process that constantly exceeds system memory. I have never seen any of my default 2 GB swap files grow or get accessed with just my standard 16 GB of memory. When I install systems now, I skip creating a swap partition or swap file.
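On a Linux host you can check this claim directly; a small sketch that reads the kernel's own counters (no extra packages needed):

```shell
# SwapTotal vs SwapFree straight from the kernel: if SwapFree equals
# SwapTotal (or both are 0 kB), swap has effectively never been touched.
grep -E '^Swap(Total|Free)' /proc/meminfo

# For per-device detail, 'swapon --show' lists each swap file/partition
# with a USED column, and 'vmstat 1' shows pages swapped in/out (si/so).
```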
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
Profile Gary Charpentier Crowdfunding Project Donor * Special Project $75 donor * Special Project $250 donor
Volunteer tester
Joined: 25 Dec 00
Posts: 30639
Credit: 53,134,872
RAC: 32
United States
Message 2000477 - Posted: 1 Jul 2019, 4:38:39 UTC - in response to Message 2000472.  
Last modified: 1 Jul 2019, 4:42:09 UTC

I know I'm showing my age, but if this old man's memory is still largely intact, a great many moons ago there was a discussion on whether or not BOINC, and SETI in particular, should use the system page file. In the client computing preferences, under the disk & memory tab, there's a box to designate the percentage of the swap/page file that BOINC uses. At the time, I think it was mainly determined that, to keep the hammering on the relatively slow (at that time) HDDs to a minimum, and because of the limited (compared to today's) storage sizes, the use of this file should be kept to the very minimum (1%). I say 1%, because 0% was not accepted by the client.

Fast forward 18 or 19 years and we've come to the age of really fast processors, SSDs, and faster and larger amounts of memory (RAM). The question being asked: is this thinking still valid? Has any further serious thought been given to this these days?

For example, on my box the system typically allocates approx. 10,147 MB (system managed), but rarely uses any of it. I would hate to see all of that space going unused.

It is a (mostly) Windows thing. Usually, when you ask Windows to auto-manage the pagefile (pagefile.sys), it creates a page file the same size as your physical RAM. If it needs it, it uses it. If it doesn't need it, then that space on your hard drive is just empty. You can tell Windows that you will manage the size and set it to whatever amount you think is correct. But with hard drives so large today, a few gigabytes doing nothing on a drive isn't really worth worrying about.

As to BOINC, the setting is there for systems like, say, a Raspberry Pi that only has 1 gigabyte of RAM. On such a system the page file likely gets a lot of use. If a task needs more than the available RAM plus the swap available to BOINC, then it won't start until some additional memory is available. But pagefiles on Linux (Raspberry Pi) are somewhat different from pagefiles on Doze*. Also, don't forget about the setting to leave suspended tasks in memory; that counts toward BOINC's memory usage. As long as you have a reasonable time set to switch between tasks, putting sleeping tasks onto disk isn't really going to affect your RAC. Also, it is just a request to the O/S not to swap; the O/S will still swap if it decides it needs the RAM. However, it can affect the performance of the computer if you are doing interactive I/O with some other program. While the system is swapping, much of everything else may stop.
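For reference, the knobs discussed above live in BOINC's global preferences; a sketch of the relevant fragment of a global_prefs_override.xml (the tag names are BOINC's own; the values are illustrative, not recommendations):

```xml
<global_preferences>
  <!-- Max percentage of the page/swap file BOINC may use
       (the "disk & memory" box from the opening post) -->
  <vm_max_used_pct>75</vm_max_used_pct>
  <!-- Max percentage of RAM while the computer is in use / idle -->
  <ram_max_used_busy_pct>50</ram_max_used_busy_pct>
  <ram_max_used_idle_pct>90</ram_max_used_idle_pct>
  <!-- 1 = leave suspended tasks in memory rather than paging them out -->
  <leave_apps_in_memory>1</leave_apps_in_memory>
  <!-- How often BOINC switches between tasks, in minutes -->
  <cpu_scheduling_period_minutes>60</cpu_scheduling_period_minutes>
</global_preferences>
```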

I think today for 99% of users whatever BOINC sets as the default is as good as any other setting. At least that is my take on it.


*The Linux system can create a page file up to the maximum size of RAM that the processor can address; for a 64-bit processor that is in the exabytes, as long as there is that much free disk.
Gene Project Donor

Joined: 26 Apr 99
Posts: 150
Credit: 48,393,279
RAC: 118
United States
Message 2000485 - Posted: 1 Jul 2019, 6:40:10 UTC

@Cliff
I'm on a Linux host, so I'm not much help with Windows system setup. But, like Keith, I have configured my system with NO swap file/partition, and I'm also using a kernel compiled with swap support disabled. With 16 GB of RAM I took a chance that nothing bad would happen with regard to the missing swap space, and (so far...) I've had no trouble. In the Computing Preferences / Disk & Memory settings I didn't bother trying to minimize the swap space percentage parameter, since there isn't any space to manage! Boinc manager didn't complain and all apps run normally. With 7 CPU tasks and 1 GPU support task running on 8 cores, I typically see 2 or 3 GB in use, including 400 MB for Firefox, so I feel comfortable with no swap space.
In browsing some of your "stderr" output I noticed a lot of messages about "missing MB_clFFTplan... files." It is not critical, just an efficiency thing. The OpenCL driver tries to find previously compiled modules and, if they're not found, recompiles them for use by the app. It probably adds only a few seconds each time the MB app runs, and thus is not a big deal for your run-times of 200 to 300 seconds.
I see your Seti 20-year mark will be coming up next month. I hope you'll post a note, when the time comes, in one of the 20-year threads here.

Gene;
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13727
Credit: 208,696,464
RAC: 304
Australia
Message 2000489 - Posted: 1 Jul 2019, 6:59:00 UTC

These days with Windows the swap file really isn't necessary, unless you have some older software that expects there to be a swap file. If it's not there, and the software expects one, it can get rather upset.
Or if you have a programme that requires huge amounts of RAM, more than you have physically installed; in that case you can expect not just the programme to get upset, but possibly a lockup or a crash of the whole system.

I'm wondering how well the Linux systems running the special application would do without a paging file, particularly those with multiple video cards?
From a WU result-
Peak swap size 19,198.06 MB
Grant
Darwin NT
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2000511 - Posted: 1 Jul 2019, 13:49:21 UTC - in response to Message 2000489.  
Last modified: 1 Jul 2019, 14:03:35 UTC

I've never accessed my swap file. I have 3 or 4 GPUs in each host, and none of them have ever accessed the swap file. I think the swap size printed in the stderr.txt output is misleading.

keith@Serenity:~$ swapon -s
Filename   Type   Size     Used   Priority
/swapfile  file   2097148  0      -2
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
Profile ML1
Volunteer moderator
Volunteer tester

Joined: 25 Nov 01
Posts: 20265
Credit: 7,508,002
RAC: 20
United Kingdom
Message 2000518 - Posted: 1 Jul 2019, 15:01:43 UTC
Last modified: 1 Jul 2019, 15:02:15 UTC

The "system page file" sounds like this is on Windows...

Regardless of which OS, in general, enabling/allocating a page file allows your system to utilize more of your RAM to good effect.


If you have enough physical RAM, then the page file never gets used.

However, if you have too little physical RAM at any moment, then your system operation is greatly degraded unless there is swap space to allow physical RAM to be freed up for new memory allocations.


In summary:

A page file is a very good idea, benign if not needed or not used, but a vital system savior when needed.


Considering how inexpensive disk space is, it is silly not to have some swap space.

(A good rule of thumb is to allocate between 0.5x and 2x the size of your system RAM. Or on Windows, just leave it at 'automatic'. Allocating 1x or more of system RAM on Linux systems allows for system hibernation if wanted.)
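Applied to a hypothetical 16 GB machine, the rule of thumb above works out as follows (a sketch; the numbers are illustrative, not a recommendation):

```shell
ram_gb=16                      # installed RAM (hypothetical)
min_swap=$(( ram_gb / 2 ))     # x0.5 lower bound of the rule
max_swap=$(( ram_gb * 2 ))     # x2 upper bound
echo "RAM ${ram_gb} GB -> swap ${min_swap} to ${max_swap} GB (>= ${ram_gb} GB if hibernation is wanted)"
```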


Hope that is of help,

Keep searchin',
Martin
See new freedom: Mageia Linux
Take a look for yourself: Linux Format
The Future is what We all make IT (GPLv3)
Profile Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 2000520 - Posted: 1 Jul 2019, 15:24:07 UTC

Having a swap file "just to be safe" means trusting the OS's page-eviction algorithms.
I wouldn't trust Windows's too much, because I've seen it swap even when enough physical memory was available. It has too many different user-behaviour prediction mechanisms that load RAM with useless data and swap the useful data to disk.
In the case of BOINC plus a compute-intensive app (such as SETI), any real need for a swap file will mean enormous performance degradation, because the app's data is accessed constantly.
The single use for swap here is to keep suspended tasks "in memory". But even there one should compare the time for re-initialization from the state file (the state file is small, so far fewer drive accesses than a swap-in, but the memory structures must be recreated and the once-per-task computations repeated) against swapping back in from the pagefile.
So, for compute-only hosts, one should avoid a pagefile altogether, use large memory pages (to reduce TLB pollution), and use pinned memory (to simplify address translation). Not all of this is easily user-controlled, but...
SETI apps news
We're not gonna fight them. We're gonna transcend them.
Profile ML1
Volunteer moderator
Volunteer tester

Joined: 25 Nov 01
Posts: 20265
Credit: 7,508,002
RAC: 20
United Kingdom
Message 2000525 - Posted: 1 Jul 2019, 17:28:34 UTC - in response to Message 2000520.  
Last modified: 1 Jul 2019, 17:29:09 UTC

The Linux OS tends to not swap at all for normal usage. I've run systems for many days where the swap remains at zero bytes.


If you wish to tweak/experiment, there is a simple single parameter called "Swappiness" that you can adjust at any time on a live system (no reboot needed!) to change the sensitivity (or reluctance) to swap.

For normal systems, I've never needed to adjust from the default of 60 "%".

For the sake of easy tweaking, for systems such as the Raspberry Pi running on an SD card, I've set the swappiness down to '10' to ensure minimal wear on the SD card. (On those, Boinc also runs entirely in RAM in tmpfs, with a save to f2fs on the SD card every few hours, again to minimise SD card wear.)
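The swappiness knob described above is an ordinary sysctl; a sketch of checking and changing it (the value 10 is the Raspberry Pi choice from this post, not a general recommendation; writing it needs root):

```shell
# Read the current value (60 is the usual distro default)
cat /proc/sys/vm/swappiness

# Change it live, no reboot required (root needed):
#   sysctl vm.swappiness=10
# Make it stick across reboots by adding this line to /etc/sysctl.conf:
#   vm.swappiness = 10
```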

All good cool computing!


Keep searchin',
Martin
See new freedom: Mageia Linux
Take a look for yourself: Linux Format
The Future is what We all make IT (GPLv3)
Cosmic_Ocean
Joined: 23 Dec 00
Posts: 3027
Credit: 13,516,867
RAC: 13
United States
Message 2000715 - Posted: 3 Jul 2019, 5:07:46 UTC

I'll toss my anecdotal evidence into here for the fun of it.

For about 10 years, I ran XP and 7 without a pagefile at all, because I had enough memory to not need it. I'm not concerned about SSD wear because these days, the wear-leveling algo on the SSD itself takes care of that problem. (for further reading: Techreport's SSD Endurance Experiment)

I don't remember exactly where I read it, but about six months ago I did read that there IS actually a positive use for the pagefile in Windows, even if you're like me and have 32 GB of RAM.

If you're the type of person who actually shuts your computer down frequently (I suspect almost none of you are... but I know some of you do for power costs during peak hours, or hot summers with no air-con), then this will be of no benefit to you. But for the rest of us who only restart or shut down when absolutely necessary (I average 25-30 days between restarts on my daily driver), this is where the pagefile is going to be the most useful.

I know basically all of you are old enough to remember when platter drives saw a benefit from defragging. That is almost a faded memory of ancient times because of SSDs now, but RAM also becomes massively fragmented over time: one process asks for more memory and has to allocate a block after an existing process's, and when that existing process exits and releases its memory pool, the block it consumed becomes available for the next process that asks for memory.

This leaves memory pools fragmented and scattered all over the place. Because it is RAM, much like SSDs, the throughput and latency are not a problem--I suspect benchmarks might not even be able to quantify the effect of heavily-fragmented memory pools. The problem becomes that SOME applications and processes have a limit to how many pools they can keep track of, not necessarily the number of bytes they can handle.

So what I was noticing is that after 12-15 days, there would be a LOT of quirky, odd behavior from some processes. No error messages, no crashes, it was just slightly odd behavior. Such as: various games would have one single audio file that seemed to be missing, or one model's texture would fail to load, and asking Steam to verify the integrity of the local game files would successfully pass--no missing files and everything passed the checksums. Load the game back up, and a *different* audio file would seem to be absent, as well as a different missing texture.

And then Firefox and/or Chrome would randomly bog down and lag really hard on a page that never did before. Just really weird behavior.

That's when I read about what exactly the pagefile does in Windows.

In the olden times, it was used for pretending to have more memory than was actually installed. There was a massive performance hit from doing this, of course, but in some cases, it was an absolute necessity. But now that installed RAM has grown to such large numbers, it is no longer fit for THAT purpose anymore.

One of the MAIN functions of the pagefile, however, is garbage collection and housekeeping. It doesn't need much in order to do this, but if you have a pagefile, the 'System' process (the kernel) will actually defrag the memory pools and make them contiguous again, periodically.

When I read about this, I was sitting on 16 days of uptime and some weird things were happening as I described above. I enabled my pagefile (on an old SSD that I don't care about) and set it to min/max of 256 MB, and hit 'apply.' I noticed almost immediately the disk activity LED was lit up solid. I pulled up task manager and went to resource monitor and looked at the disk tab.

X:\pagefile.sys had a steady stream of ~1.5 MB/sec on read and write, doing hundreds of operations/sec. After 15 minutes, all of that activity stopped, and everything felt snappier. Loaded up the game that was being weird just 20 minutes prior.. everything worked as it should.

And now I go 30+ days quite often without noticing any weird behavior.

In order for memory-pool defragging to happen, the kernel needs somewhere to be able to MOVE memory pages. It writes an identical copy of a page from RAM to disk, then releases that page in RAM and tells the process that the page is now on disk. Then it can go to the end of an existing block in RAM for that process, write what was on disk back into RAM, and inform the process that the one piece of memory pool has expanded and contains the same page numbers that used to be elsewhere.

And that's why it's a PAGEfile, and why Linux calls it SWAP: you're swapping pages from RAM to disk and back again.

There IS a benefit and use for it in these modern times, but it doesn't need to be "system managed", else you'll end up with a minimum of 1.5x installed RAM. 256 MB is plenty for it to do what it needs to do.

And.. there's still some REALLY old applications here and there that just DEMAND that there be virtual memory, and a tiny pagefile seems to satisfy that demand. Win-win.


So there's my anecdotal story. Don't know if that answers your question or not.
Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving-up)
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2000718 - Posted: 3 Jul 2019, 5:31:46 UTC

Very good anecdotal story Cosmic. I can't say I've experienced the exact same symptoms since the hosts don't do anything but crunch normally. The host that would/should display similar symptoms would be my daily driver since it does a lot of other things besides crunch. I think my normal uptime between reboots is somewhere around 3-4 weeks before some update begs for a reboot. I have not experienced any weirdness so far and now that I occasionally look for any movement on the swap file, still haven't seen any.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
Profile Cliff Harding
Volunteer tester
Joined: 18 Aug 99
Posts: 1432
Credit: 110,967,840
RAC: 67
United States
Message 2001794 - Posted: 9 Jul 2019, 10:29:51 UTC

Looking at the conversations on this, I decided to change the swap/page file from 'system managed' to a min of 800 MB (btw, that is the minimum that windoze allows w/o screaming that, if there is a system error that causes a dump, like a BSOD, there may not be enough space) and a max of 1000 MB. Should be enough for the OS to get by on.
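For anyone who prefers to script that change rather than click through the System Properties dialog, a configuration sketch using the classic wmic tool (run from an elevated command prompt; a reboot is needed before the new sizes take effect; untested here):

```
:: Turn off "system managed" so the explicit sizes below stick
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False

:: Set the min/max chosen above: 800 MB initial, 1000 MB maximum
wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=800,MaximumSize=1000
```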


I don't buy computers, I build them!!
Profile Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 2002332 - Posted: 13 Jul 2019, 0:07:51 UTC
Last modified: 13 Jul 2019, 0:17:45 UTC

Well, an interesting application for the page file... but 2 questions arise:
1) A page file can contain only moveable pages, not pinned ones. So how could such a defrag ever work once pinned memory becomes fragmented? (Actually, it can't at all.)
2) Non-pinned memory is accessed as virtual pages; that is, physical pages are mapped into processes' virtual address spaces. Nothing actually prevents moving physical pages around while maintaining the same mapping into the processes' address spaces, doing an "in memory" defrag. Why would Windows need a page file for that, then?...
In short: pinned memory can't be swapped out at all, and moveable memory can be relocated between physical pages in memory.
SETI apps news
We're not gonna fight them. We're gonna transcend them.



 
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.