Roy and Niels

Wednesday, November 21, 2012

SHIELD-HIT12A demo version released

Here is a little sneak-preview of the upcoming release of SHIELD-HIT12A.

Pimp my Niva... (artist's impression of SHIELD-HIT12A).

SHIELD-HIT12A is a Monte Carlo particle transport code capable of transporting heavy ions through arbitrary media. The -A fork was made in 2010 from SHIELD-HIT08, and since then we have added plenty of new features, fixed many bugs, increased calculation speed and tuned the nuclear models against new data on carbon-12 fragmentation.

A free demo version, where the random seed and statistics are fixed to 10,000 particles, can be downloaded from the project development page. There are builds for Linux and Windows systems (32- and 64-bit). It is a beta release, but we would like to bring this demo version to a broader community, and thereby hopefully also fix some more bugs before we release the full version.

The new features in SHIELD-HIT12A are:
  • New simplified material and beam parameter parser: free format, and extensible without breaking backward compatibility.
  • 279 default ICRU materials and elements are included; it has never been so easy to specify a material.
  • New beam model: divergence and focus distance can now be specified (thanks to Uli Weber from Marburg).
  • Arbitrary starting beam directions now possible.
  • New routine for Vavilov straggling, 5-6 times faster than the original one by Rotondi and Montagna, which was used in Geant3.21. In total, this alone yields a speed improvement of roughly 30-40%.
  • The ripple filter has two modes of operation: Monte Carlo type or modulus type.
  • Logarithmic energy binning in SPC files for TRiP
  • Full howto for generating DDD files for TRiP
  • Now only three input files are needed to set up a run.
  • Improved documentation.
  • Scoring by zones using detect.dat (complementary to Cartesian mesh and cylindrical scoring)
  • Alanine response model included, so SHIELD-HIT12A can directly calculate the dose equivalent response in alanine.
  • Flat circular and square beams can be defined
  • Neutron data for natural Argon was added (needed for detailed simulations of air)
  • Another ton of bug fixes.
Of course SHIELD-HIT12A includes the features from SHIELD-HIT10A (which was never released):
  • Totally new (parallelizable) scoring system:
    • Arbitrary Mesh and Cylindrical scoring
    • Lots of detectors, such as energy, fluence, dose-averaged LET, track-averaged LET, average velocity (beta), dose to medium (where the medium can be changed if you want to calculate stopping power ratios), etc.
  • Finally, SHIELD-HIT10A is parallelizable
  • New random number generator, which gives a massive performance boost
  • New adjusted inelastic cross sections for carbon ions based on recent data
  • Fine tuning of the Fermi break-up parameters
  • SHIELD-HIT10A can now be configured without touching the source code, so no programming knowledge is required to use it.
  • SHIELD-HIT10A is installable
  • Runs on Linux again, even when compiling with code optimizations; works with GNU gfortran, Intel and Portland compilers.
  • Many bug fixes
Enjoy! And please drop me a line when you find bugs in the software and errors in the manual; they are there, but we hid them well. :o)


Friday, November 16, 2012

Medical physics papers on the arXiv: Some stats

Last year I made a post about the state of open access in medical physics and the use of the arXiv to make medical physics papers freely available to everyone. Today I've decided to follow that up by taking a look at what is going on over at and sorting through some of their data.

The arXiv is "an openly accessible, moderated repository for scholarly articles in specific scientific disciplines". Beyond simply allowing all comers to read and download the articles they host, the arXiv also makes information about its vast collection easily available via a set of APIs. I decided to make use of the API to download meta-data (i.e. author, title, abstract, etc.) for all articles in the medical physics category. Let's take a look!

First, some raw numbers. As of mid-November 2012, the physics.med-ph category had (about) 980 articles. Those 980 articles were co-listed in 76 of the 126 possible other arXiv categories along with the medical physics category. In Figure 1 I've plotted the number of submissions by year (with a partial count for 2012). It's clear that the submission rate to physics.med-ph has greatly increased. On the plot I've fit a logistic growth curve (Gompertz), as I'm assuming that the submission number will saturate at some point. You can see faint exponential and linear fits as well. The logistic model predicts 174 submissions for 2012, but 140-145 seems more likely this year.
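For the curious, the Gompertz curve has the form N(t) = a·exp(-b·exp(-c·t)), which saturates at the asymptote a for large t. A minimal sketch of this behaviour, using illustrative parameters rather than the values actually fitted to the submission data:

```python
import math

def gompertz(t, a, b, c):
    """Gompertz growth curve N(t) = a * exp(-b * exp(-c * t)).

    Approaches the saturation level 'a' as t grows large."""
    return a * math.exp(-b * math.exp(-c * t))

# Illustrative parameters only -- NOT the values fitted in Figure 1.
a, b, c = 500.0, 6.0, 0.25

for t in range(0, 41, 10):  # years since the first submission
    print(t, round(gompertz(t, a, b, c), 1))
```

For large t the curve flattens out at a, which plays the role of the assumed saturation level of yearly submissions.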

Figure 1. Submissions per year to the physics.med-ph category on

Another item of interest is how the medical physics category fits into the arXiv. If you've browsed physics.med-ph before, you know that it contains a broader range of topics than the popular "medical physics" journals, such as Physics in Medicine and Biology or Medical Physics. As mentioned above, many of the 980 articles were listed in multiple categories (663 to be exact). Figure 2 shows the most popular co-categories for articles in physics.med-ph. As might be expected, biophysics ( was the most popular co-category, along with topics that seem well aligned, such as instrumentation and detectors (physics.ins-det), tissues and organs (q-bio.TO), and computational physics (physics.comp-ph). But less obvious categories were co-submitted as well, such as chaotic dynamics (nlin.CD).

Figure 2. "Co-categories" of papers submitted to physics.med-ph.
To try to get a better idea of how these categories interplay with one another, I made some simple network graph visualizations. Figures 3 and 4 show the connections between the co-categories. All of the papers are clearly in physics.med-ph, and all other categories are linked to that. The other lines in the network plots represent a paper that was simultaneously in more than two categories (e.g. medical physics, biophysics, and data analysis).

Figure 3. Network graph of categories for papers submitted to physics.med-ph. Click for larger version.
Figure 4. Network graph of categories for papers submitted to physics.med-ph. Click to view larger version with category labels on the nodes.
In Figure 4, the region near the center of the graph contains the categories that are most likely to be co-listed with other categories. As you might expect, the categories with a dense set of lines connecting them also tend to be the most frequently occurring categories, as seen in Figure 2. This is more easily seen in the larger version of Figure 4 with category names (click the figure above).

As you can see, the arXiv physics.med-ph category is a wide-ranging and growing category for open access articles related to medical physics. It will be interesting to see how it fits in with the wider trends of funder mandates for open access and the general growing acceptance of and demand for open access in our community.

N.B. This arXiv meta-data is relatively easily pulled down and processed with Python tools using the arXiv API, going from XML to JSON to your computer screen at home!
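Here is a minimal sketch of that kind of harvesting. The query URL follows the arXiv API's documented format; the sample Atom feed below is a made-up single entry standing in for a real API response, so the parsing can be shown without a network call:

```python
import urllib.parse
import xml.etree.ElementTree as ET

ATOM = "{}"  # Atom XML namespace used by the arXiv API

def medph_query_url(start=0, max_results=100):
    """Build an arXiv API query URL for the physics.med-ph category."""
    params = urllib.parse.urlencode({
        "search_query": "cat:physics.med-ph",
        "start": start,
        "max_results": max_results,
    })
    return "http://export.arxiv.org/api/query?" + params

# A made-up single-entry Atom feed, standing in for a real API response.
SAMPLE_FEED = """<?xml version="1.0"?>
<feed xmlns="">
  <entry>
    <title>An example medical physics preprint</title>
    <published>2012-03-14T00:00:00Z</published>
    <category term="physics.med-ph"/>
    <category term=""/>
  </entry>
</feed>"""

def parse_entries(feed_xml):
    """Extract (title, year, categories) tuples from an Atom feed."""
    root = ET.fromstring(feed_xml)
    entries = []
    for entry in root.findall(ATOM + "entry"):
        title = entry.find(ATOM + "title").text
        year = int(entry.find(ATOM + "published").text[:4])
        cats = [c.get("term") for c in entry.findall(ATOM + "category")]
        entries.append((title, year, cats))
    return entries

print(medph_query_url())
print(parse_entries(SAMPLE_FEED))
```

In practice you would fetch medph_query_url() with urllib.request (respecting the arXiv API's rate limits) and feed the returned XML to parse_entries(); counting the years and category lists then gives the numbers behind Figures 1 and 2.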

Thursday, June 21, 2012

libdEdx 1.2.0 released - stopping powers for the masses!

Our master's student Jakob Toftegaard has been very busy lately, and we're now ready to release a new version of the open-source stopping power library libdEdx - version 1.2.0.

Changes relative to the last version, 1.0, are:
  • first of all: a new, nice and clean API. This breaks compatibility with 1.0, but we will do our best to avoid this happening again in the future. (We now make extensive use of structs, which can be extended with new members.)
  • generic ICRU table included, which combines ICRU49 and the revised ICRU73.
  • all four calculation modes in MSTAR are now supported, the default is that recommended by Helmut Paul.
  • aggregate state can be specified, following ICRU recommendations
  • I-values can be overridden for analytical functions (BETHE_EXT)
  • provides a bunch of new functions
    • calculate CSDA range
    • inverse range lookup - a given range returns the required particle energy in the CSDA approximation. (Yes, this is the feature you have been waiting for all your life!)
    • inverse dEdx lookup - a given stopping power yields an energy (either the high or the low value, depending on what the user requested)
  • version string of libdEdx can be accessed
  • memory leak fixes
  • typo fixes in material lists
  • code should now be thread-safe
And we're just getting started!
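To illustrate what the CSDA-range and inverse-lookup features do, here is a numerical sketch. This is NOT the libdEdx API: the stopping power below is a toy power law, purely for illustration. The CSDA range is the integral of 1/S(E') from 0 to E, and the inverse lookup can be done by bisection, since range grows monotonically with energy:

```python
def stopping_power(E):
    """Toy stopping power in MeV/cm -- NOT real data, illustration only."""
    return 2.0 * E ** 0.2

def csda_range(E, steps=10000):
    """CSDA range: integrate dE'/S(E') from 0 to E.

    Midpoint rule avoids evaluating S at E' = 0."""
    dE = E / steps
    return sum(dE / stopping_power((i + 0.5) * dE) for i in range(steps))

def energy_from_range(target_range, lo=1e-6, hi=1000.0, tol=1e-6):
    """Inverse range lookup: find E with csda_range(E) == target_range,
    by bisection (range is monotonic in energy)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if csda_range(mid) < target_range:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

E = 100.0  # MeV
R = csda_range(E)
print(R, energy_from_range(R))  # round-trips back to ~100 MeV
```

The inverse dEdx lookup works the same way, except that the stopping power curve has a maximum, which is why the user has to choose between the high- and low-energy branch.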

A real gem is our new web-based frontend to libdEdx. Here you can look up stopping powers using various tables and energies, and use it as a supplement to the tables from NIST. The website includes a nice plotting feature as well, where you can add multiple plots for comparison.

The web frontend is still in beta-testing phase and may reside in this state for a long time. Any feedback is appreciated.

We have a lot of plans for how to continue with this. In the next version of libdEdx we plan to include more features, such as
  • more Bethe-based stopping power functions, such as
    • Bethe-Bloch
    • Bethe-Bloch-Barkas
  • algorithms for nuclear stopping power
  • … and a surprise which we won’t reveal yet … :-)
We would also like to include additional stopping power programs such as ESTAR, ATIMA and SRIM (just the stopping power part, of course), but the outcome will depend on the willingness of the respective authors to contribute.

Other contributors will be most welcome; the project is available on SourceForge for inspection.

We greatly acknowledge our hero Helmut Paul for contributing to the development with very fruitful discussions and suggestions. We also acknowledge the permission from the ICRU to use their stopping power tables in libdEdx.

Yet, we do not claim that the produced results are correct in any way, so any use of the data is at your own risk. Nonetheless, if you DO find discrepancies, errors, or misbehaviour of the code, we would really appreciate it if you tell us.


Friday, June 1, 2012

Don't Mention the War

Last year the Danish government announced that it will finance a national particle therapy facility together with some private funding agencies. Two applications were submitted for evaluation: one from Copenhagen and one from Aarhus.

Of course there is a strong interest from each side to host this project. Since I am part of the Aarhus Particle Therapy Group, I will outright admit that I am naturally in favour of our project in Aarhus.

The plan is now that an international expert commission will be set up by the ministry of health. That commission will then evaluate the two applications and recommend where to place such a facility.

If you compare the two applications, you will see that they both apply for a 3-gantry treatment facility. Copenhagen locks itself into a cyclotron solution, whereas we in Aarhus are also open to a synchrotron-based solution.

Basically the differences can be summarized here:

Copenhagen                                  Aarhus
3 treatment rooms with 2 gantries           3 treatment rooms with 2 gantries, +1 research room
3rd gantry will be installed later          3rd gantry will be installed later; a fourth treatment room with gantry can be added
Cyclotron                                   Cyclotron or synchrotron
1000 patients per year                      1000 patients per year

So basically what we apply for is rather similar.

What about the costs? If you compare those budgets, you will note that the prices of the equipment and the building are comparable too.

                                                  Copenhagen        Aarhus
Equipment                                         450 mio. DKK      475 mio. DKK
Building                                          382 mio. DKK      295 mio. DKK
Supply systems and lines                          195 mio. DKK      (incl. in building)
Purchase and demolition of Rockefeller Building   147 mio. DKK      (none)
Total                                             1174 mio. DKK     770 mio. DKK
Optional 3rd gantry                               (not mentioned)   75 mio. DKK

Copenhagen needs to build in the middle of the city, whereas we in Aarhus have a pristine green field adjacent to the New University Hospital. This means that the expenses for the Copenhagen proposal are significantly higher than for the Aarhus solution, since a building has to be demolished first (they simply lack space in Copenhagen). The "supply lines", as far as I heard, cover establishing extra power for the facility; please correct me if I am wrong here. The annual operational costs are also similar.

There have been a few articles in the Danish press about the two applications. None of the journalists managed to actually compare the two budgets, so basically the message was that Aarhus offers particle therapy at dumping prices. Yeah, right.

In addition, I hear a lot of weird arguments, e.g. that such a facility MUST be built in Copenhagen, since it is the capital city and has proximity to the airport.


How many of the existing facilities are located in capital cities? Let's pick some of the most famous ones: HIT is in Heidelberg (147,000 inhabitants, a one-hour drive to the nearest airport). PSI is in Villigen. The Swedish facility is being built in Uppsala, not Stockholm. Why is that so? These research institutions simply had the longest track record in particle therapy! I don't get the point about closeness to an airport either, since you can reach any point in Denmark within a few hours by car, and sorry guys... particle therapy is not that acute.

Now, today this conflict took a very curious turn, as Copenhagen announced in Dagens Medicin that they will cut the price by 75% (login required to view the link). What is going on?
It seems that Copenhagen suddenly decided to go for a single-gantry facility by Mevion, formerly known as Still River Systems. To me this looks like a very poor decision. The Mevion facility has been controversial for a long list of reasons. In a scientific article, Schippers and Lomax list some of the issues with this facility:

  • The very compact design of the cyclotron may lead to large beam losses and resulting activation.
  • The repetition frequency of synchrocyclotrons is typically 1 kHz, which limits the applicability of the different pencil beam scanning methods.
  • There is no beam analysis and there are no magnets in the beam path; in other words, an energy selection system is missing. This means that the beam will have a poor distal edge, always looking as if it had traversed 25-30 cm of material, e.g. when treating head and neck cancers. This limits possible treatment planning techniques, such as patch fields.
  • Neutron contamination may be an issue, due to the proximity of the degrader, modulator and collimator(s).
To this I might add that the construction of the prototype is still awaiting FDA approval, and we have not seen any data on how this machine operates. We are just in the middle of a never-ending scandal about a couple of diesel IC4 trains ordered from AnsaldoBreda a decade ago, where the trains still are not functional, with a significant risk that they never will be. Billions in taxpayer money have been lost there. Most Danes are therefore allergic to ordering unproven technology from abroad.

Update: Mevion just got their FDA approval.

Another update: it's a "Premarket Notification" and not a "New Device Approval". Read it here. Thanks to Klaus Seiersen for pointing this out.

The recent article in Dagens Medicin does not even mention that cutting 75% of the price leads to a single-gantry facility, which means the patient numbers need to be adjusted down, perhaps by two-thirds.

Finally, if you need three treatment rooms, then measured in costs per treatment room, a conventional solution is cheaper than the Mevion solution, according to Schippers.

I am very astonished; it seems that Copenhagen just scored an own goal. However, I fear this may delay the decision process even more. In the end this is about patients, and if the money is there, we should not be satisfied with giving our patients the most inferior kind of proton therapy, relying on unproven technology.

Sunday, January 22, 2012

The Quick and Dirty Guide for Parallelizing FLUKA

(Single PC version)

Imagine you have a desktop or laptop PC with 4 or perhaps even 8 CPU cores available, and you want to run the Monte Carlo particle transport program FLUKA on it using all CPU cores.
The FLUKA execution script rfluka, however, was designed to run in "serial" mode. That is, if you request to repeat your simulation many times (say, 100) by issuing the command rfluka -N0 -M100 example, each process is launched serially, instead of utilizing all available cores on your PC.

A solution is to use a job queuing system and a scheduler. Here I'll present one way to do it on a Debian-based Linux system. Ubuntu should work just as well, since Ubuntu is very similar to Debian. A feature of the method presented here is that it can easily be extended to cover several PCs on your network, so you can use the computing power of your colleagues when they are not using their PCs (e.g. at night). However, this post will keep things very simple, namely setting it up just on your own PC. In less than 10 minutes you'll have it up and running...

The idea is to use TORQUE in a very minimal configuration. There will be no fuss with Maui or similar schedulers; we will only use packages we can get from the Debian/Ubuntu software repositories.
In order to be friendly to all the Ubuntu users out there, all commands issued as root are here prefixed with the "sudo" command. As a Debian user you can become root using the "su" command first.

First install these packages:

$ sudo apt-get install torque-server torque-scheduler 
$ sudo apt-get install torque-common torque-mom libtorque2
and either
$ sudo apt-get install torque-client
or
$ sudo apt-get install torque-client-x11

After installation we need to set up torque properly. I assume here that your PC's hostname cannot be resolved by DNS, which is quite common on small local networks. You can test whether your hostname can be resolved with the "host" command. Assuming your PC has the name "kepler", you may get an answer like:

$ host $HOSTNAME
Host kepler not found: 3(NXDOMAIN)

This means you may need to edit the /etc/hosts file, so your PC can associate an IP number with your hostname. Debian-like distros have a propensity to assign the hostname to, which will not work with torque. Instead, I looked up my IP number (which in my case is pretty static) using /sbin/ifconfig, and edited /etc/hosts accordingly using my favourite text editor (emacs, gedit, vi...).
My /etc/hosts file ended up looking like this (with 192.168.1.64 standing in for your actual IP number):   localhost
# kepler.lan kepler
192.168.1.64   kepler

If the hostname of your PC can be resolved, you can omit the last line, but in all circumstances you must comment out the line starting with

Once this is done, execute the following commands to configure torque:
$ echo $HOSTNAME | sudo tee /etc/torque/server_name
$ echo $HOSTNAME | sudo tee /var/spool/torque/server_name
$ sudo pbs_server -t create
$ echo "$HOSTNAME np=`grep proc /proc/cpuinfo | wc -l`" | sudo tee /var/spool/torque/server_priv/nodes
$ sudo qterm
$ sudo pbs_server
$ sudo pbs_mom

(Update: If qterm fails, you probably have a problem with your /etc/hosts file. You can still kill the server with $ killall -r "pbs_*".)

Now let's see if things are running as expected:
$ pbsnodes -a
     state = free
     np = 4
     ntype = cluster
     status = rectime=1326926041,varattr=,jobs=,state=free,netload=3304768553,gres=,loadave=0.09,ncpus=4,physmem=3988892kb,availmem=6643852kb,totmem=7876584kb,idletime=2518,nusers=2,nsessions=8,sessions=1183 1760 2170 2271 2513 15794 16067 16607,uname=Linux kepler 3.1.0-1-amd64 #1 SMP Tue Jan 10 05:01:58 UTC 2012 x86_64,opsys=linux

and also
$ sudo momctl -d 0 -h $HOSTNAME

Host: kepler/kepler   Version: 2.4.16   PID: 16835
Server[0]: kepler (
  Last Msg From Server:   279 seconds (CLUSTER_ADDRS)
  Last Msg To Server:     9 seconds
HomeDirectory:          /var/spool/torque/mom_priv
MOM active:             280 seconds
LogLevel:               0 (use SIGUSR1/SIGUSR2 to adjust)
NOTE:  no local jobs detected

Now set up a queue, which here is called "batch".
$ sudo qmgr -c 'create queue batch'
$ sudo qmgr -c 'set queue batch queue_type = Execution'
$ sudo qmgr -c 'set queue batch resources_default.nodes = 1'
$ sudo qmgr -c 'set queue batch resources_default.walltime = 01:00:00'
$ sudo qmgr -c 'set queue batch enabled = True'
$ sudo qmgr -c 'set queue batch started = True'
$ sudo qmgr -c 'set server default_queue = batch'
$ sudo qmgr -c 'set server scheduling = True'

[Update: you may want to increase walltime to 10:00:00 so jobs don't stop after 1 hour.]

and start the scheduler:
$ sudo pbs_sched

The rest of the commands can be issued as a normal user (i.e. non-root).

Let's see if all servers are running:
$ ps -e | grep pbs
 1286 ?        00:00:00 pbs_mom
 1293 ?        00:00:00 pbs_server
 2174 ?        00:00:00 pbs_sched

Anything in the queue?
$ qstat
Nope, it's empty.

Let's try to submit a simple job:
$ echo "sleep 20" | qsub

and within the next 20 seconds you can test if it's in the queue:
$ qstat
Job id                    Name             User            Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
0.kepler                 STDIN            bassler                0 R batch

Great, now we're ready to rock 'n' roll! This is really a minimalistic setup which just works. For more bells and whistles, check the torque manual.

All we need is a simple FLUKA job submission script:
#!/bin/bash
#PBS -N FLUKA_JOB
# How to use this:
# change to the directory with the files you want to run
# and enter:
# $ qsub -V -t 0-9 -d .
start=$PBS_ARRAYID
let stop="$start+1"
stop_pad=`printf "%03i\n" $stop`
# Init a new random number sequence for each calculation.
# This may be a poor solution.
cp $FLUPRO/random.dat ranexample$stop_pad
sed -i '/RANDOMIZE        1.0/c\RANDOMIZE        1.0 '"${RANDOM}"'.0 \' example.inp
$FLUPRO/flutil/rfluka -N$start -M$stop example -e flukadpm3

Update: Note that the RANDOMIZE card in your own .inp file must match the sed regular expression above, or else you may repeat the exact same simulation over and over again...

Let's submit 10 jobs:
$ qsub -V -t 0-9 -d .

And watch the blinkenlichts.
$ qstat
Job id                    Name             User            Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
15-0.kepler               FLUKA_JOB-0      bassler                0 R batch          
15-1.kepler               FLUKA_JOB-1      bassler                0 R batch          
15-2.kepler               FLUKA_JOB-2      bassler                0 R batch          
15-3.kepler               FLUKA_JOB-3      bassler                0 R batch          
15-4.kepler               FLUKA_JOB-4      bassler                0 Q batch          
15-5.kepler               FLUKA_JOB-5      bassler                0 Q batch          
15-6.kepler               FLUKA_JOB-6      bassler                0 Q batch          
15-7.kepler               FLUKA_JOB-7      bassler                0 Q batch          
15-8.kepler               FLUKA_JOB-8      bassler                0 Q batch          
15-9.kepler               FLUKA_JOB-9      bassler                0 Q batch 

Surely this can be improved a lot; suggestions are most welcome in the comments below. One problem, for instance, is that the random number seed taken from bash's $RANDOM is limited to a 15-bit integer (0-32767), which covers only a very small fraction of the possible seeds for the RANDOMIZE card.
Update: There is also a very small risk that the same seed is occasionally used twice (or more often). Alternatively, one could just add a random number to a starting seed after each run. (Any MC random number experts out there?)
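One simple scheme, sketched here in Python for clarity (the job scripts themselves are bash), is to derive each job's seed deterministically from a base seed plus the array index; this guarantees distinct, reproducible seeds across the job array. The maximum seed value below is an assumption about the RANDOMIZE card; check the FLUKA manual for the exact limit.

```python
def job_seed(base_seed, job_index, max_seed=900000000):
    """Derive a distinct, reproducible seed for each array job.

    max_seed is an ASSUMED upper limit for FLUKA's RANDOMIZE card
    (check the manual); wrap around rather than exceed it."""
    return (base_seed + job_index) % max_seed

# Ten array jobs, ten distinct seeds:
print([job_seed(12345, i) for i in range(10)])
```

In the bash script this would amount to passing $PBS_ARRAYID plus a fixed base seed to the sed substitution, instead of $RANDOM.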

Output data can be processed in the regular ways, using flair.
Alternatively, you may use some of the scripts in the auflukatools package, which for instance can merge USRBIN output with a single command. Auflukatools also includes a CONDOR job submission script, which is better suited for heterogeneous clusters.

Finally, here is a job script for SHIELD-HITxxA (which is even shorter):

#!/bin/bash
# How to use:
# change to the directory you want to run in
# $ qsub -V -t 0-9 -d .
shield_exe -N$PBS_ARRAYID


Totally unrelated: some nice pics from the Budker Institute for Nuclear Physics in Novosibirsk, Russia, were just posted online. Certainly worth a look. :-) Heaps of pioneering accelerator technology was developed there, such as electron cooling, the first collider, lithium lenses (e.g. for capturing antiprotons), and they supplied the conventional magnets for the beam transfer lines to the LHC at CERN. I visited the center many years ago, but my pics are not as good. :-/ The German wiki about Budker himself is also worth reading.

Wednesday, January 11, 2012

Visit at the Primary Standards Laboratory in Slovakia

This post is not related to computing, but more to medical physics. Primary standards dosimetry laboratories (PSDL) are important for medical physicists, since they define fundamental quantities such as dose. If you buy a dosimeter, say an ionization chamber, it is most likely calibrated at a PSDL (for ample amounts of money) or at a secondary standards dosimetry laboratory (SSDL), which is linked to a PSDL. Not all countries have a PSDL or an SSDL, and some countries (like Slovakia in this case) have both PSDL and SSDL facilities. To my knowledge, Denmark quite recently got SSDL status at the National Institute for Radioprotection.

During the summer vacation of 2011, after finishing the run at CERN, I had a rather messy tour across Europe and also went to Bratislava to visit some friends. Here I got the chance to get a tour of the Slovakian PSDL at the Metrological (not to be mistaken for the Meteorological) Institute. It is my second visit to a PSDL - a few years ago I visited the PSDL at the National Physical Laboratory in the UK, but I only got crappy mobile phone pictures.

The Metrological Institute of Slovakia, Bratislava.

The director of the institute, Jozef Dobrovodsky, gave me a tour of the facility. They have a close cooperation with the NPL - most physicists working with particle therapy may have heard of proton dosimetry expert Hugo Palmans, who works at NPL near London but (quite conveniently) actually lives in Bratislava.

Jozef and Hugo looking at Roos plane parallel ionization chambers from PTW, well suited for measuring depth-dose curves of pencil shaped ion beams.
Outline of the facility.
The facility has a Betatron, a Cobalt-60 unit, a 320 kV X-ray unit, a Caesium-137 irradiator and a neutron vault with various neutron sources.

Mock-up models of Cs-137 sources (these are NOT radioactive).
A part of the control room, with the very well known UNIDOS electrometer by PTW, which I worked a lot with e.g. while at CERN.
Probably the most important room is where the Co-60 irradiator is kept. Co-60 has a long history of serving as reference radiation for a wide range of dosimetric tasks. Beam quality is usually expressed relative to the Co-60 standard. However, Co-60 irradiators are getting rare. Radiation treatment with Co-60 is nowadays rather something seen in developing countries; at most hospitals the units have been replaced with megavolt linear accelerators, also for safety reasons. (Messing with radioactive sources is always a bad thing and should be avoided.) As a researcher, it gets increasingly difficult to get access to a proper reference radiation.

The yellow box holds the Co-60 source, behind the tank an additional collimator is visible which can be mounted in front of the Co-60 unit.

Co-60 irradiation room. The tank holds a Markus ionization chamber, and the dose-rate can be reduced by increasing the distance to the Co-60 irradiation unit.
Next we took a look at the X-ray irradiation room. X-rays have lower energies than Co-60 photons and are produced electrically, not by a radioactive source.
Two X-ray sources are seen here, one in the background with wheels of various copper filters which can be positioned in the beam. Copper filters remove characteristic lines of the X-ray spectrum, thereby flattening it. In front, an X-ray diagnostic device is visible.
How to make 90 Volts. :)
Cs-137 provides a photon field at about half the energy of the MeV-range Co-60 photons.
Cs-137 irradiation room.
Cs-137 irradiator seen from the front. Aperture can slide to the right, exposing the room to the source.
A real gem was their betatron: it's a Czech construction which can deliver both electron and photon beams. The betatron is an old design, originally conceived by the Norwegian Rolf Widerøe, who also invented the idea of drift tubes, widely used at almost any accelerator today. Betatrons (especially functioning betatrons) are a very rare sight today; most were replaced with LINACs a long time ago. I once saw a betatron at the physics department in Freiburg, Germany, but it was not operational anymore. This one, however, is still functioning! (Look how clean and tidy it is... I am used to messy laboratories.)

Second time I ever see a betatron. Yay! :-)
You can extract either photons or electrons on either side. This is the photon exit (I think).
They even had a spare betatron tube, heavily tarnished by radiation damage to the glass.
Control console for the betatron. Nice and sleek.
Power supply and controls for the betatron. Many components are still genuine Czech, manufactured by TESLA.
Finally we visited the neutron vault. Here they had three neutron irradiators: two different accelerator-based sources and a range of radioactive neutron emitters. The neutron sources were kept in a cave in the floor, shielded with lots of plastic material for neutron moderation/absorption. The sources they have are quite common: some intense alpha emitter (plutonium or americium) mixed with a light material (beryllium). A californium source, which fissions spontaneously, was also there.

Neutron vault. In the floor several neutron sources are kept, and can be raised out of the cave by the visible holder. I was a bit hesitant, when the scientist suddenly pulled a string, and the holder surfaced out of the neutron cave, as shown on this picture. “Is it empty?!” “Sure it's empty.”
This accelerator was very cute: protons accelerated towards a tritium target produce a neutron beam. The design of the high-voltage terminal looks very much like the one found in the terminal of our Van de Graaff KN machine in Aarhus.
The ion source can be seen in the middle.
The beam is directed onto a tritium metal-hydride target, which is rotated to distribute the dissipated power over a larger area. This produces a neutron beam, exiting to the lower left.

I have always found accelerator technology very interesting, especially old designs where you can easily recognize what is going on (or not). If it is an Eastern European design, it's even more interesting, since those tend to look rather different and often show signs of various improvisations.

This is some Russian accelerator-based neutron source. However, it was not really used, if I remember correctly, and unfortunately I didn't pick up all the details about it.
I was once told that it is very characteristic of Russian accelerator systems that the vacuum tubes are fixed with only 4 screws.

This concludes our little tour at the irradiation facilities of the primary standard lab in Bratislava. Thanks to Hugo Palmans and Jozef Dobrovodsky for the tour!

An antiproton and a proton dosimetry researcher meet. No annihilation, but some sort of a bound state, clearly sharing common goals and interests.