Hi Chris,
This really clears things up a lot.
Cool.
I didn't realise nconfig_write was the number of configurations - I thought it was related to checkpointing and later continuation into DMC.
Better make sure your previous optimizations actually worked, then - run the 'envmc' utility in the directory to see the results of the optimizations. But wait, you can't, because..
I was using casino2.6 because it's the most up-to-date version installed on HECTOR, and since HECTOR is being switched off next week, and since I'm only playing around, I thought I wouldn't bother compiling my own.
See, I didn't even know HECTOR had a centrally-installed version of CASINO (and I use HECTOR myself). God knows where they got CASINO 2.6 from - I didn't realize the computer was that old! Googling reveals:
http://www.hector.ac.uk/support/documen ... re/casino/ - whoah! keep your teeth in, Grandpa..
This is an ideal example of why the installation instructions say the following (see CASINO/README_INSTALL, the http://vallico.net/casinoqmc/how-to-install/ page, or question A6 in the FAQ):
"
Note for sysadmins: CASINO is not currently designed to be installed
system-wide by the root user; rather, a separate copy should be installed by
the user under his or her home directory. Amongst other reasons, this is
because the CASINO distribution contains a huge number of utilities (with
large numbers of executable files and scripts which most users of a multi-user
machine will not require) along with examples and documentation which the user
will wish to access."
The main point is that almost all sysadmins believe that 'installing a program' means compiling a single binary executable and sticking it in a directory somewhere. They ignore the fact that CASINO comes as a distribution with loads of tools and other bits and pieces that need to be provided as a whole. And they hardly ever get the idea of the runqmc script. We've written things so you can just type essentially the same command (e.g. 'runqmc -p 120 -T3h -s') on any computer in the world, and it just works. Forget about batch scripts and loading and unloading modules and qsubbing: the runqmc utility writes the batch script itself and submits it for you, as well as checking everything for errors, cleaning up, etc. Provided the machine has been set up properly, it knows the time limits on particular queues and stuff like that, and it handles the shared memory/OpenMP stuff required in the batch script that you might easily forget. You'll thank God for it on complex machines like Blue Gene/Qs.
But the sysadmins don't think you need it, so you just get the CASINO binary executable (which of course won't include Shm, Openmp or OpenmpShm support, as these require different executables) and a stupid standard batch script that won't actually work in most cases. Sigh, I don't know why we bother.
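To illustrate the point: on a Cray/PBS machine like Archer, that single runqmc command replaces a hand-written batch script along these lines. Everything below (job name, module swap, binary name) is illustrative rather than what runqmc actually emits - the real generated script is machine-specific, which is exactly why it's better left to runqmc:

```shell
#!/bin/bash --login
#PBS -N casino_job                  # illustrative job name
#PBS -l select=5                    # 5 nodes x 24 cores = 120 cores
#PBS -l walltime=3:00:00            # the -T3h limit, written out by hand
cd $PBS_O_WORKDIR
module swap PrgEnv-cray PrgEnv-gnu  # match the environment the binary needs
aprun -n 120 ./casino               # launch; Shm runs also need the right
                                    # aprun placement flags - easy to forget
```

Get any one of those lines wrong (queue limits, placement flags, module environment) and the job dies or runs slowly; runqmc fills them all in for you.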
But I take your point completely about the large improvements made since 2.6 and will compile 2.12 tonight. Future production runs on Archer will be casino2.12+
You need to install the CASINO current_beta version (the only one to support Archer) - this will become the official CASINO 2.14 distribution sometime in the next couple of weeks. Don't bother with the supposedly official 2.12.1 - this is verging on obsolescence already (things evolve very fast these days..)
To install on Archer: use the 'Auto detect' option of the install script, accept the three suggested CASINO_ARCHs with an 'archer' suffix, sort them into order of preference using the [s] option (the Gnu compiler is best), save your configuration with the [q] option, source ~/.bashrc, and you're done.
To compile, remember that on a machine like this (and indeed on most modern multicore machines) you'll want to run the code in Shm shared-memory mode (see chapter 38 of the current_beta manual): select '1:Shm' in the [c] compile mode of the install script.
Run it with the runqmc script, having first typed 'runqmc --help' to see what options you have (these change depending on the machine, and include special machine-specific options - e.g. on Archer there is a special flag to use the large-memory nodes). The -s flag should be used to run the shared-memory executable.
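Put together, the whole procedure is a short terminal session, roughly as sketched below. This assumes the installer is invoked as './install' from the top of the distribution as described in README_INSTALL; the directory, core count and time limit are just examples:

```shell
cd ~/CASINO                # your own copy, under your home directory
./install                  # 'Auto detect'; accept the 'archer' CASINO_ARCHs,
                           # [s] to put the Gnu compiler first, [q] to save
source ~/.bashrc           # pick up the new environment
./install                  # then the [c] compile mode, selecting '1:Shm'
runqmc --help              # check the machine-specific options first
runqmc -p 120 -T3h -s      # 120 cores, 3-hour limit, shared-memory binary
```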
Any problems, let me know.
I get the error "Bad reblock convergence. Too few data points? Standard error in standard error..." after each reblocked energy is written, and I thought this could be prevented by increasing nblock.
I'm aware that the on-the-fly reblocking output lacks clarity (amongst other problems); funnily enough, the thing I'm doing right now is rewriting that bit of the code in preparation for the new release. If only people wouldn't keep asking silly questions on the forum, I might actually have finished by now.. (joke!)
